The Future of AI: Ordering Pizza

August 25, 2016

I love the examples trotted out by real journalists when reporting about artificial intelligence. Why define the term when one can bite into such morsels as:

“you want to go to eat a pizza with certain characteristic in a place you love, you want to make a reservation. So from your car with your display, you can reserve your place in the restaurant and ask for the menu and choose your pizza. When you are near the restaurant, the city gives you information to your car, where the nearest garage is, and where you have more places and the price. You can do this all via search. This is the real revolution.”

The source of this recycled nugget from the days of Scott McNealy and his sound bites is “The Future of Search: Start-ups Look to Take on Google with Artificial Intelligence.”

The write up profiles several companies pointing the way to the future of artificial intelligence and maybe, just maybe, a challenge to the Alphabet Google thing. Excited yet? I am. Definitely.

What drives the entrepreneurs at Twiggle and FacilityLive and other AI start-ups? Easy. With Dan Grigsby’s departure from Deep Machine, greener pastures seem to be an allure.

As I said, “Excited.” Sort of.

Stephen E Arnold, August 25, 2016

Real Time Data Analysis for Almost Anyone

August 25, 2016

The idea that Everyman can tap into a real time data stream and perform “analyses” is like catnip for some. The concept appeals to those in the financial sector, but these folks often have money (yours and mine) to burn. The idea seems to snag the attention of some folks in the intelligence sector who want to “make sense” out of Twitter streams and similar flows of “social media.” In my experience, big outfits with a need to tap into data streams have motivation and resources. Most of those who fit into my pigeonhole have their own vendors, systems, and methods in place.

The question is, “Does Tom’s Trucking need to tap into real time data flows to make decisions about what paint to stock or what marketing pitch to use on the business card taped to the local grocery’s announcement board?”

I plucked from my almost real time Web information service (Overflight) two articles suggesting that there is money in “them thar hills” of data.

The first is “New Amazon Service Uses SQL To Query Streaming Big Data.” Amazon is a leader in the cloud space. The company may not be number one on the Gartner hit parade, but some of those with whom I converse believe that Amazon continues to be the cloud vendor to consider and maybe use. The digital Wal-Mart has demonstrated both revenue and innovation with its cloud business.

The article explains that Amazon has picked up the threads of Hadoop, SQL, and assorted enabling technologies and woven them into Amazon Kinesis Analytics. The idea is that Amazon delivers a piping hot Big Data pizza via a SQL query. The write up quotes an Amazon wizard as saying:

“Being able to continuously query and gain insights from this information in real-time — as it arrives — can allow companies to respond more quickly to business and customer needs,” AWS said in a statement. “However, existing data processing and analytics solutions aren’t able to continuously process this ‘fast moving’ data, so customers have had to develop streaming data processing applications — which can take months to build and fine-tune — and invest in infrastructure to handle high-speed, high-volume data streams that might include tens of millions of events per hour.”

Additional details appear in Amazon’s blog post here. The idea is that anyone with some knowledge of things Amazon, coding expertise, and a Big Data stream can use the Amazon service.
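
To make the “SQL query over a stream” idea concrete, here is a minimal, hypothetical sketch using Python and boto3 against the original Kinesis Analytics API. The application name, stream names, and columns are illustrative placeholders, not details from Amazon’s announcement.

```python
# Hypothetical sketch: a continuous SQL query registered with Amazon Kinesis
# Analytics via boto3. Names and columns are invented for illustration.
import boto3

# Kinesis Analytics applications are written in a streaming dialect of SQL.
# This query counts events per ticker over tumbling one-minute windows.
APPLICATION_CODE = """
CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (ticker_symbol VARCHAR(8), ticker_count INTEGER);
CREATE OR REPLACE PUMP "STREAM_PUMP" AS
  INSERT INTO "DESTINATION_SQL_STREAM"
  SELECT STREAM ticker_symbol, COUNT(*) AS ticker_count
  FROM "SOURCE_SQL_STREAM_001"
  GROUP BY ticker_symbol, STEP("SOURCE_SQL_STREAM_001".ROWTIME BY INTERVAL '60' SECOND);
"""

client = boto3.client("kinesisanalytics")  # the original (V1) Kinesis Analytics API

# Register the application; a Kinesis stream input and an output destination
# can be attached afterward with add_application_input / add_application_output.
client.create_application(
    ApplicationName="streaming-sql-sketch",
    ApplicationDescription="Toy example of a continuous SQL query over a stream",
    ApplicationCode=APPLICATION_CODE,
)
```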

The second write up is “Microsoft Power BI Dashboards Deliver Real Time Data.” The idea seems to be that Microsoft is in the real time data analysis poker game as well. The write up reveals:

Power BI’s real-time dashboards — known as Real-Time Dashboard tiles — builds on the earlier Power BI REST APIs release to create real-time tiles within minutes. The tiles push data to the Power BI REST APIs from streams of data created in PubNub, a real-time data streaming service currently used widely for building web, mobile and IoT applications.

The idea is that a person knows the Microsoft methods, codes the Microsoft way, and has a stream of Big Data. The user then examines the outputs via “tiles,” which are updated in real time. As mentioned above, Microsoft is the Big Data Big Dog in the Gartner kennel. Obviously Microsoft will be price competitive, with the service priced at about $10 per month. The original price was about $40 a month, but cost cutting fever is raging in Redmond.
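
As a rough illustration of the “push data, watch the tiles update” workflow, the sketch below uses Python and the generic Power BI push-rows REST endpoint rather than the PubNub integration the quote mentions. The dataset ID, table name, and access token are placeholders, and a push or streaming dataset must already exist in Power BI.

```python
# Hypothetical sketch: pushing rows to a Power BI push/streaming dataset.
# Tiles pinned from this dataset refresh as new rows arrive.
from datetime import datetime, timezone
import requests

DATASET_ID = "<dataset-id>"          # placeholder
TABLE_NAME = "sensor_readings"       # placeholder table in the push dataset
ACCESS_TOKEN = "<azure-ad-token>"    # placeholder Azure AD bearer token

url = (
    "https://api.powerbi.com/v1.0/myorg/datasets/"
    f"{DATASET_ID}/tables/{TABLE_NAME}/rows"
)
payload = {
    "rows": [
        {"timestamp": datetime.now(timezone.utc).isoformat(), "temperature": 72.4}
    ]
}

response = requests.post(
    url, json=payload, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"}
)
response.raise_for_status()  # success means the tile-backing dataset got the row
```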

The question is, “Which of these services will dominate?” Who knows? Amazon has a business and a real time pitch which makes sense to those who have come to depend on the AWS services. Microsoft has business customers, Windows 10, and a reseller/consulting community eager to generate revenue.

My thought is, “Pick your horse, put down your bet, and head to the Real Time Data Analytics race track.” Tomorrow’s $100 ticket is only a few bucks today. The race to low cost entry fees is about to begin.

Stephen E Arnold, August 25, 2016

Computers Will Talk Pretty One Day Soon with NLP

August 25, 2016

The article titled Natural Language Processing: Turning Words Into Data on B2C takes an in-depth look at NLP and why it is such a difficult area to perfect. Anyone who has conversed with an automated customer service system knows that NLP technology is far from ideal. Why is this? The article suggests that while computers are great at learning the basic rules of language, things get far more complex when you throw in context-dependent or ambiguous language, not to mention human error. The article explains,

“This has changed with the advent of machine learning…In the case of NLP, using a real-world data set lets the computer and machine learning expert create algorithms that better capture how language is actually used in the real world, rather than on how the rules of syntax and grammar say it should be used. This allows computers to devise more sophisticated—and more accurate—models than would be possible solely using a static set of instructions from human developers.”

Throw in Big Data and we have a treasure trove of unstructured data to glean value from in the form of text messages, emails, and social media. The article lists several exciting applications such as automatic translation, automatic summarization, Natural Language Generation, and sentiment analysis.
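
For a sense of what “learning from data rather than rules” looks like in practice, here is a toy sentiment analysis sketch, one of the applications the article lists, using scikit-learn. The four training sentences are invented for illustration; real systems learn from large labeled corpora.

```python
# Toy sketch: a sentiment classifier that learns word/label associations from
# examples instead of hand-written grammar rules. Training data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I love this phone, the battery lasts all day",
    "great service and a friendly rep",
    "the app crashes constantly, total waste of money",
    "terrible support, nobody answered my email",
]
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

# Bag-of-words features feeding a linear model: no syntax rules anywhere.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["love the friendly support"]))  # expected: [1]
print(model.predict(["what a waste, it crashes"]))   # expected: [0]
```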

Chelsea Kerwin, August 25, 2016

Truth or Fiction: US Army Cannot Count Money

August 24, 2016

I believe everything I read on the Internet. When the information comes from a real journalism type outfit, I am no Doubting Thomas. I wish to point out that the write up “US Army Fudged Its Accounts by Trillions of Dollars, Auditor Finds” strikes me as fiction. Just to keep the math straight, here’s a summary of numbers:

  • 1,000 is one thousand
  • 10,000 is ten one thousands
  • 100,000 is ten ten thousands
  • Let’s jump up a bit.
  • One million is 1,000,000
  • A billion is 1,000,000,000
  • A trillion is 1,000,000,000,000.

In Zimbabwe there was a 10 trillion dollar bill. So misplacing a bill is easy to do:

[Image: a Zimbabwe 10 trillion dollar note]

My recollection from my days at Booz, Allen is that most humanoids have difficulty with quantities over 1,000. Imagine what happens when one has to think about trillions or a one followed by 12 zeros.


If the write up is on the money, the US Army is composed of individuals who cannot deal with big numbers or money. I learned:

The Defense Department’s Inspector General, in a June report, said the Army made $2.8 trillion in wrongful adjustments to accounting entries in one quarter alone in 2015, and $6.5 trillion for the year. Yet the Army lacked receipts and invoices to support those numbers or simply made them up.

How can a US federal entity make up numbers? The Department of Defense is into Windows and Excel. The US Army has a fancy data aggregation and analysis system called the Distributed Common Ground System-Army, or DCGS-A.

The write up stated:

The report affirms a 2013 Reuters series revealing how the Defense Department falsified accounting on a large scale as it scrambled to close its books. As a result, there has been no way to know how the Defense Department – far and away the biggest chunk of Congress’ annual budget – spends the public’s money. The new report focused on the Army’s General Fund, the bigger of its two main accounts, with assets of $282.6 billion in 2015. The Army lost or didn’t keep required data, and much of the data it had was inaccurate, the IG said.

I was surprised an auditor was able to assemble the needed information. I highlighted this statement from the source article:

The IG report also blamed DFAS [Defense Finance and Accounting Service], saying it too made unjustified changes to numbers. For example, two DFAS computer systems showed different values of supplies for missiles and ammunition, the report noted – but rather than solving the disparity, DFAS personnel inserted a false “correction” to make the numbers match. DFAS also could not make accurate year-end Army financial statements because more than 16,000 financial data files had vanished from its computer system. Faulty computer programming and employees’ inability to detect the flaw were at fault, the IG said.

Trillions. Hmmm. Why not put DCGS-A on the forensic team? If that system does not work, why not let Palantir Gotham have a go at figuring out where the money went? Another option is IBM i2 Analyst’s Notebook, right?

Yes, government integrity. There’s a Web site for that too: https://www.oge.gov/.

Did you know that?

Stephen E Arnold, August 24, 2016

Russia Versus Alphabet Google: Mr. Putin May Use an iPhone

August 24, 2016

I read “Out-Of-Court Settlement Between Google & Russia Won’t Happen.” I assume the write up is accurate because everything on the Internet is true blue. The Alphabet Google thing has been jousting with a mere nation state over its approach to Android’s market methods.

Alphabet Google tried for an out of court settlement to negotiate the matter. Whipping out the checkbook is one part of the Alphabet Google business strategy when nation states become too big for their britches.

According to the write up:

In this case, the issue is that Google’s licensing rules require manufacturers to include a number of Google applications should they wish to install and use Android, the open-source operating system, on their smartphones and tablets. Google’s Russian competitor, Yandex, complained to the authorities in 2014 that Google was forcing manufacturers to both include the Google Search and other services along with the Google Play Store on Android-powered devices, but also that Google blocked manufacturers from installing competitor services.

Short summary: Bad, bad Alphabet Google. The fine for this flouting of Russian laws is around US$6.5 million. Russia seems to want cash and the Alphabet Google matter to go away, at least for a while.

I do not understand why mere nation states like Russia cannot get with the Alphabet Google program. Is the new Alphabet Google going to impose trade restrictions on Russia? Will Alphabet Google accuse Russia of violating human rights because companies are people too? Will Alphabet Google ask Android users to protest in front of the FSB office in Moscow? Does Mr. Putin use an iPhone?

So many questions.

Stephen E Arnold, August 24, 2016

Microsoft Considers Next Generation Artificial Intelligence

August 24, 2016

While science fiction portrays artificial intelligence in novel and far-reaching ways, certain products utilizing artificial intelligence are already in existence. WinBeta released a story, Microsoft exec at London conference: AI will “change everything”, which reminds us of this. Digital assistants like Cortana and Siri are one example of how mundane AI can appear. However, during a recent AI conference, Microsoft UK’s chief envisioning officer Dave Choplin projected much more impactful applications. This article summarizes the landscape of concerns,

Of course, many also are suspect about the promise of artificial intelligence and worry about its impact on everyday life or even its misuse by malevolent actors. Stephen Hawking has worried AI could be an existential threat and Tesla CEO Elon Musk has gone on to create an open source AI after worrying about its misuse. In his statements, Choplin also stressed that as more and more companies try to create AI, ‘We’ve got to start to make some decisions about whether the right people are making these algorithms.’

There is much to consider in regard to artificial intelligence. However, such a statement about “the right people” cannot stop there. Choplin goes on to refer to the biases of the people creating algorithms and the companies they work for. Organizational structures must be considered, and so too must their motivator: the economy. Perhaps machine learning to understand the best way to approach AI would be a good first application.

Megan Feil, August 24, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Can Analytics Be Cloud Friendly?

August 24, 2016

One of the problems with storing data in the cloud is that it is difficult to run analytics.  Sure, you can run tests to determine the usage of the cloud, but analyzing the data stored in the cloud is another story.  Program developers have been trying to find a solution to this problem and the open source community has developed some software that might be the ticket.  Ideata wrote about the newest Apache software in “Apache Spark-Comparing RDD, Dataframe, and Dataset.”

Ideata is a data software company, and it built many of its headline products on the open source software Apache Spark.  The company has been using Apache Spark since 2013 and enjoys it because it offers a rich abstraction, lets developers build complex workflows, and makes data analysis easy.

Apache Spark works like this:

Spark revolves around the concept of a resilient distributed dataset (RDD), which is a fault-tolerant collection of elements that can be operated on in parallel. An RDD is Spark’s representation of a set of data, spread across multiple machines in the cluster, with API to let you act on it. An RDD could come from any datasource, e.g. text files, a database via JDBC, etc. and can easily handle data with no predefined structure.
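
A minimal PySpark sketch of the idea in the passage above; the file paths and column name are hypothetical, and it is only meant to show the RDD versus DataFrame distinction the Ideata post compares.

```python
# Hedged sketch: the same Spark session used two ways. Paths are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-vs-dataframe-sketch").getOrCreate()

# RDD: a fault-tolerant, schema-less collection partitioned across the cluster.
lines = spark.sparkContext.textFile("logs.txt")              # hypothetical file
error_count = lines.filter(lambda line: "ERROR" in line).count()
print(error_count)

# DataFrame: the same engine, but with a schema, so it can be queried like a table.
events = spark.read.json("events.json")                      # hypothetical file
events.groupBy("status").count().show()                      # "status" is assumed

spark.stop()
```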

It can be used as the basis for a user-friendly cloud analytics platform, especially if you are familiar with what can go wrong with a dataset.

Whitney Grace, August 24, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Bot Landscape Includes Search

August 23, 2016

Search and retrieval technology finds a place in a “bot landscape.” The collection of icons appears in “Introducing the Bots Landscape: 170+ Companies, $4 Billion in Funding, Thousands of Bots.” The diagram of the bots landscape in the write up is, for me, impossible to read. I admit it does convey the impression of a lot of bots. The high resolution version was also difficult for me to read. You can download a copy and take a gander yourself at this link. But there is a super high resolution version available for which one must provide a name and an email. Then one goes through a verification step. Clever marketing? Well, annoying to me. The download process required three additional clicks. Here it is. A sight for young eyes.

[Image: the bots landscape graphic]

I was able to discern a reference to search and retrieval technology in the category labeled “AI Tools: Natural Language Processing, Machine Learning, Speech & Voice Recognition.” I was able to identify the logo of Fair Isaac and the mark of Zorro, but the other logos were unreadable by my 72 year old eyes.

The graphic includes these bot-agories too:

  1. Bots with traction
  2. Connectors and shared services
  3. Bot discovery
  4. Bot developer frameworks and tools
  5. Analytics
  6. Messaging.

The bot landscape is rich and varied. MBAs and mavens are resourceful and gifted specialists in classification. The fact that the categories are, well, a little muddled is less important than finding a way to round up so many companies worth so much money.

Stephen E Arnold, August 23, 2016

No More Data Mining for Intelligence

August 23, 2016

The U.S. intelligence community will no longer receive information from Dataminr, which serves as a Twitter “fire hose” (Twitter owns five percent of Dataminr). An article from ThreatPost, Twitter Turns Off Fire Hose For Intelligence Community, offers the story. A Twitter spokesperson stated the company has had a longstanding policy against selling data for surveillance. However, the Journal reported the arrangement was terminated after a CIA test program concluded. The article continues,

Dataminr is the only company allowed to sell data culled from the Twitter fire hose. It mines Tweets and correlates that data with location data and other sources, and fires off alerts to subscribers of breaking news. Reportedly, Dataminr subscribers knew about the recent terror attacks in Brussels and Paris before mainstream media had reported the news. The Journal said its [sources] inside the intelligence community said the government isn’t pleased with the decision and hopes to convince Twitter to reconsider.

User data shared on social media has a myriad of potential applications for business, law enforcement, education, journalism and countless other sectors. This story highlights how applications for journalism may be better received than applications for government intelligence. This is something worth noticing.

Megan Feil, August 23, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden/Dark Web meet up on August 23, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233019199/

Another Robot Finds a Library Home

August 23, 2016

Job automation has its benefits and downsides.  Some of the benefits are that it frees workers to take on other tasks, cuts costs, improves efficiency, and speeds turnaround.  The downside is that it could eliminate jobs and take the human factor out of customer service.  When it comes to libraries, automation and books/research appear to be the antithesis of each other.  Automation, better known as robots, is invading libraries once again, and people are up in arms that librarians are going to be replaced.

ArchImag.com shares the story “Robot Librarians Invade Libraries In Singapore” about how the A*Star Research library uses a robot to shelf read.  If you are unfamiliar with library lingo, shelf reading means scanning the shelves to make sure all the books are in their proper order.  The shelf reading robot has been dubbed AuRoSS.  During the night AuRoSS scans books’ RFID tags, then generates a report about misplaced items.  Humans are still needed to put materials back in order.
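
The shelf-reading chore reduces to a simple comparison: take the tag sequence the robot scanned and flag anything that breaks the expected call-number order. The sketch below is purely illustrative (not AuRoSS’s software), and real call-number comparison is fussier than plain string ordering.

```python
# Illustrative sketch: flag scanned items that break ascending shelf order.
# Real library call numbers need a smarter comparison than plain strings.
def misplaced_items(scanned_call_numbers):
    """Return call numbers that appear after a 'larger' neighbor."""
    flagged = []
    highest_so_far = None
    for call_number in scanned_call_numbers:
        if highest_so_far is not None and call_number < highest_so_far:
            flagged.append(call_number)      # out of sequence; needs reshelving
        else:
            highest_so_far = call_number
    return flagged

# Hypothetical nightly scan of one shelf:
scan = ["QA76.5", "QA76.9", "QA76.7", "QA77.1"]
print(misplaced_items(scan))  # ['QA76.7'] -- report goes to a human to fix
```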

The fear, however, is that robots can fulfill the same role as a librarian.  Attach a few robotic arms to AuRoSS and it could place the books in the proper places by itself.  There already is a robot named Hugh answering reference questions:

New technologies thus seem to storm the libraries. Recall that one of the first librarian robots, Hugh, could officially take his position at the university library in Aberystwyth, Wales, at the beginning of September 2016. Designed to respond to students’ spoken requests, he can tell them where the desired book is stored or show them on which shelf the books on the topic that interests them are located.

It is going to happen.  Robots are going to take over the tasks of some current jobs.  Professional research and public libraries, however, will still need someone to teach people the proper way to use materials and find resources.  It is not as easy as one would think.

Whitney Grace, August 23, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden/Dark Web meet up on August 23, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233019199/
