GAO DCGS Letter B-412746

June 1, 2016

A few days ago, I stumbled upon a copy of a letter from the GAO concerning Palantir Technologies dated May 18, 2016. The letter became available to me a few days after the 18th, and the US holiday probably limited circulation of the document. The letter is from the US Government Accountability Office and signed by Susan A. Poling, general counsel. There are eight recipients, some from Palantir, some from the US Army, and two in the GAO.


Has the US Army put Palantir in an untenable spot? Is there a deus ex machina about to resolve the apparent checkmate?

The letter tells Palantir Technologies that its protest of the DCGS Increment 2 award to another contractor is denied. I don’t want to revisit the history or the details as I understand them of the DCGS project. (DCGS, pronounced “dsigs”, is a US government information fusion project associated with the US Army but seemingly applicable to other Department of Defense entities like the Air Force and the Navy.)

The passage in the letter I found interesting was:

While the market research revealed that commercial items were available to meet some of the DCGS-A2 requirements, the agency concluded that there was no commercial solution that could meet all the requirements of DCGS-A2. As the agency explained in its report, the DCGS-A2 contractor will need to do a great deal of development and integration work, which will include importing capabilities from DCGS-A1 and designing mature interfaces for them. Because the agency concluded that significant portions of the anticipated DCGS-A2 scope of work were not available as a commercial product, the agency determined that the DCGS-A2 development effort could not be procured as a commercial product under FAR part 12 procedures. The protester has failed to show that the agency’s determination in this regard was unreasonable.

The “importing” point is a big deal. I find it difficult to imagine that IBM i2 engineers will be eager to permit the Palantir Gotham system to work like one happy family. The importation and manipulation of i2 data in a third party system is more difficult than opening an RTF file in Word in my experience. My recollection is that the unfortunate i2-Palantir legal matter was, in part, related to figuring out how to deal with ANB files. (ANB is i2 shorthand for Analyst’s Notebook’s file format, a somewhat complex and closely-held construct.)

Net net: Palantir Technologies will not be the dog wagging the tail of IBM i2 and a number of other major US government integrators. The good news is that there will be quite a bit of work available for firms able to support the prime contractors and the vendors eligible and selected to provide for-fee products and services.

Was this a shoot-from-the-hip decision to deny Palantir’s objection to the award? No. I believe the FAR procurement guidelines and the content of the statement of work provided the framework for the decision. However, context is important as are past experiences and perceptions of vendors in the running for substantive US government programs.


Hewlett Packard Enterprise: Cut It Up and Sell Off the Parts

May 28, 2016

Someone called me to alert me that Hewlett Packard Enterprise was doing the mitosis approach to financial goodness. As you recall, gentle reader, Hewlett Packard chopped itself in half, emulating Solomon’s approach to shared custody. One part was printers and ink. The other part was everything not part of the printers and ink deal.

The resulting non-ink outfit was dubbed Hewlett Packard Enterprise. The solution to HP’s revenue problems was to create two companies, make bankers happy, and ponder what to do next. The answer, according to “Hewlett Packard Enterprise Surges on Move to Merge Services Unit with CSC,” is to create an HP outfit and a spinoff/merger deal.

The write up states:

The union will create “a pure-play, global IT services powerhouse,” said HP Enterprise in a statement.

The HPE entity will sell hardware. The HP-CSC entity seems to be called Spinco. Spinco suggests spin off or spin out and reminds me of PR spin. HPE is now free to become a big dog because the annoying little puppies like printers and ink and the thrilling EDS operation are at a minimum an arm’s length away.

I recall a series of MBA type paragraphs published by ZDNet. Hey, a listicle dragged out over six weeks is ideal for the mobile phone researcher. Navigate to “Worst Tech Mergers and Acquisitions.” Number one with a bullet was HP and Compaq. HP also made the list at Number four with its purchase of Autonomy. Not bad: HP accounts for 40 percent of the top five worst deals of all time in the eyes of the really expert ZDNet researchers.

I once tracked Autonomy closely. I have included information about IDOL in the forthcoming Palantir Notebook we are finalizing. In the last couple of years, Autonomy faded from my radar. Obviously it is not a giant blip on the HPE control room radar either.

Several questions/observations are warranted:

  • Is it time for the top brass at HPE to withdraw from the field of battle now that the corporate aircraft carrier has been refitted and is once again seaworthy?
  • What happens to those lucky licensees of various Autonomy technologies?
  • Will HPE continue to grow its revenues and once again hit the $100 billion in revenue mark?
  • Will People Magazine cover the party the legal eagles, accountants, and financial institutions which worked on the deal will hold at the La Quinta in South San Francisco?

From my vantage point in Harrod’s Creek, Kentucky, I am not sure that the newly painted HPE will be able to match the performance of other, more modern money machines.

Stephen E Arnold, May 28, 2016

Quotes to Note: The Thiel-Hulk Matter

May 26, 2016

The downsizing New York Times is channeling the Gawker thing. I read “Tech Billionaire in a Secret War with Gawker.” [Note: You may or may not be able to view this. Speak to the Gray Lady, not me.] The billionaire is Peter Thiel, a founder of PayPal and a number of other high profile and wildly successful companies. He is, I learned, a member of the PayPal mafia. Who knew?


I was not sure what a “demigod” was. I turned to Google. The first hit is this illustration apparently from a video game. Who knew?

I am not interested in the news story about a person who wants to fight for truth, justice, and the Silicon Valley way. I am not sure who Hulk Hogan is. That’s okay. The write up contained some quotes to note. I don’t want to lose track of these. I might want to spice up a report or a lecture with these allegedly accurate statements made by a powerful, rich wizard. Here you go:

  1. The story is not a story. It is a “bizarre and astounding back story.” [The New York Times] I once read similar headlines in the IGA store waiting for a human to check out my toothpaste and sparkling water purchases. Who published stories with these words? I think it was the National Enquirer.
  2. “I refuse to believe that journalism means massive privacy violations.”—Peter Thiel
  3. “We wanted flying cars, instead we got 140 characters,” is the Founders Fund tag line.—The New York Times quoting a Web site.

Great stuff. I wonder how Palantir Technologies, a company founded by Mr. Thiel, who is characterized as having “demigod status”, feels about the leaks to Buzzfeed. Should that reporter be concerned about legal action? I hope not.

Stephen E Arnold, May 26, 2016

MarkLogic Tells a Good Story

May 25, 2016

I lost track of MarkLogic when the company hit about $51 million in revenue and changed CEOs in 2006. In 2012, another CEO change took place. Since Gary Bloom, a former Oracle executive, took over, the company, according to “Gary Bloom Interview: Big Data Driving Sales Boom at MarkLogic,” is now “topping” $100 million in annual revenue.

MarkLogic is one of the outfits laboring in the DCGS / DI2E vineyard. The company may be butting heads with outfits like Palantir Technologies as the US Army’s plan to federate its systems and data moves forward.

MarkLogic opened for business in 2003 and has ingested, according to Crunchbase, $175 million in venture funding. With a timeline equivalent to Palantir Technologies’, there may be some value in comparing these two “startups” and their performance. That is an exercise better left to the feisty young MBAs who have to produce a return for the Sequoia and Wellington experts.

The interview contained two interesting statements which I found surprising:

The driver is Big Data: large corporations are convinced there is an El Dorado of untapped commercial opportunities — if only they can run their reports across all their data sources. But integrating all that data is too costly, and takes too long with relational databases. The future will be full of data in many forms, formats, and sources and how that data is used will be the differentiator in many competitive battles. If that data can’t be searched it can’t be used.

That is indeed the belief and the challenge. Based on what I have learned via open sources about the DCGS project, the reality is different from the “all” notions which fill the heads of some of the vendors delivering a comprehensive intelligence system to US government clients. In fact, the reality today seems to me to be similar to the hope for the Convera system when it was doing the “all” approach to some US government information. That, as you may recall, did not work out as some had hoped.

The second statement I highlighted is:

Although MarkLogic is tiny compared to Oracle there are some interesting parallels. “MarkLogic is at about the same size as Oracle was when I began working there. It took a long time for Oracle to get security and other enterprise features right, but when it did, that was when company really took off.”

The stakeholders hope that MarkLogic does “take off.” With more than 12 years of performance history under its belt, MarkLogic could be the next big thing. The only hitch in the git along is that normalization of information and data has to take place. Then there is the challenge of the query language. One cannot overlook the competitors which continue to bedevil those in the data management game.

With Oracle also involved in some US government work, there might be a bit of push back as the future of MarkLogic rolls forward. What happens if IBM’s data management systems group decides to acquire MarkLogic? Excitement? Perhaps.

Stephen E Arnold, May 25, 2016

Big Data and Value

May 19, 2016

I read “The Real Lesson for Data Science That is Demonstrated by Palantir’s Struggles · Simply Statistics.” I love write ups that plunk the word statistics near simple.

Here’s the passage I highlighted in money green:

… What is the value of data analysis?, and secondarily, how do you communicate that value?

I want to step away from the Palantir Technologies’ example and consider a broader spectrum of outfits tossing around the jargon “big data,” “analytics,” and synonyms for smart software. One doesn’t communicate value. One finds a person who needs a solution and crafts the message to close the deal.

When a company and its perceived technology catch the attention of allegedly informed buyers, a bandwagon effect kicks in. Talk inside an organization leads to mentions in internal meetings. The vendor whose products and services are the subject of these comments begins to hint at bigger and better things at conferences. Then a real journalist may catch a scent of “something happening” and writes an article. Technical talks at niche conferences generate wonky articles, usually without dates or footnotes, which make sense to someone without access to commercial databases. If a social media breeze whips up the smoldering interest, then a fire breaks out.

A start up has to be clever, lucky, or tactically gifted to pull off this type of wildfire. But when it happens, big money chases the outfit. Once money flows, the company and its products and services become real.

The problem with companies processing a range of data is that there are some friction inducing processes that are tough to coat with Teflon. These include:

  1. Taking different types of data, normalizing it, indexing it in a meaningful manner, and creating metadata which is accurate and timely.
  2. Converting numerical recipes, many with built in threshold settings and chains of calculations, into marching band order able to produce recognizable outputs.
  3. Figuring out how to provide an infrastructure that can sort of keep pace with the flows of new data and the updates/corrections to the already processed data.
  4. Generating outputs that people in a hurry or in a hot zone can use to positive effect; for example, in a war zone, not get killed when the visualization is not spot on.
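The first friction point above can be made concrete with a small sketch. This is not any particular vendor's pipeline; the field names, alias table, and schema are invented for illustration:

```python
from datetime import datetime, timezone

def normalize_record(raw: dict, source: str) -> dict:
    """Map a source-specific record onto a common schema and attach
    provenance metadata. The schema and alias table are illustrative only."""
    # Different feeds name the same field differently.
    field_aliases = {"lat": "latitude", "lon": "longitude", "ts": "timestamp"}
    record = {field_aliases.get(k, k): v for k, v in raw.items()}
    # Metadata must be accurate and timely, or downstream indexing suffers.
    record["_meta"] = {
        "source": source,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }
    return record

rec = normalize_record(
    {"lat": 38.3, "lon": -85.6, "ts": "2016-05-19T10:00:00Z"}, source="feed_a"
)
print(sorted(k for k in rec if not k.startswith("_")))
# → ['latitude', 'longitude', 'timestamp']
```

Even this toy version hints at the friction: every new source means a new alias table, and the metadata is only as timely as the ingestion process that stamps it.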

The write up focuses on a single company and its alleged problems. That’s okay, but it understates the problem. Most content processing companies run out of revenue steam. The reason is that the licensees or customers want the systems to work better, faster, and more cheaply than predecessor or incumbent systems.

The vast majority of search and content processing systems are flawed, expensive to set up and maintain, and really difficult to use in a way that produces high reliability outputs over time. I would suggest that the problem bedevils a number of companies.

Some of those struggling with these issues are big names. Others are much smaller firms. What’s interesting to me is that the trajectory content processing companies follow is a well worn path. One can read about Autonomy, Convera, Endeca, Fast Search & Transfer, Verity, and dozens of other outfits and discern what’s going to happen. Here’s a summary for those who don’t want to work through the case studies on my Xenky intel site:

Stage 1: Early struggles and wild and crazy efforts to get big name clients

Stage 2: Making promises that are difficult to implement but which are essential to capture customers looking actively for a silver bullet

Stage 3: Frantic building and deployment accompanied with heroic exertions to keep the customers happy

Stage 4: Closing as many deals as possible either for additional financing or for licensing/consulting deals

Stage 5: The early customers start grousing and the momentum slows

Stage 6: Sell off the company or shut down like Delphes, Entopia, Siderean Software and dozens of others.

The problem is not technology, math, or Big Data. The force which undermines these types of outfits is the difficulty of making sense out of words and numbers. In my experience, the task is a very difficult one for humans and for software. Humans want to golf, cruise Facebook, emulate Amazon Echo, or like water find the path of least resistance.

Making sense out of information when someone is lobbing mortars at one is a problem which technology can only solve in a haphazard manner. Hope springs eternal and managers are known to buy or license a solution in the hopes that my view of the content processing world is dead wrong.

So far I am on the beam. Content processing requires time, humans, and a range of flawed tools which must be used by a person with old fashioned human thought processes and procedures.

Value is in the eye of the beholder, not in zeros and ones.

Stephen E Arnold, May 19, 2016

Listen Up. Hear and Know Enables Information Access in an Innovative Way

May 18, 2016

Improbable as it sounds, I found myself a short distance from the offices once housing the Exalead search company. Back then I used Google Maps to find my way from the Opéra to the Rue Royale, where Exalead had its office. GPS did not do the job. Exalead was located next to a food shop behind intrepid Parisians who parked their Smart Cars, bicycles, and motos on the sidewalk.

On this trip to Paris I was going to learn about a company with technology that performed some GPS type functions without GPS.

In addition to tracking hardware and firmware, the company called Hear and Know has a database system which sends out emails and SMS alerts to inform the team tracking an object of interest exactly where that object is in real time. Based on my concerns about the precision of GPS centric systems, I wanted to understand the Hear and Know approach. (Yes, “hear” refers to the company’s approach to capturing audio.)

Instead of search, the company Hear and Know developed systems and methods to have information flow directly to a person who needs to know who, what, where, and when events take place. This is practical, real time, and actionable information. None of that keyword search and fuzzy geo-location implementation.

Like Google, Exalead was anchored in the world of Alta Vista, Hotbot, and Lycos. A failure to recognize the impact of mobility, pervasive connectivity, and an insatiable appetite for gizmos or firmware that leapfrog the keyword approach locked the door on traditional search. At the same time, mobile and wireless kicked open the door to new ways of thinking about information: here and now, real time, flows, and the potential of embedding smart technology in miniaturized components.

Times change.

On the dot, Jean Philippe Lelièvre, founder of Hear and Know, walked in the door of my so-so hotel not far from the Madeleine metro stop in Paris. M. Lelièvre sat down, ordered a Badoit, and reminded me that he and I had met at a conference in a country soon to be named “Czechia.”

With my studied Kentucky suaveness, I asked: “What’s up?”

The answer was that Lelièvre’s company continues to attract customers from government sectors as well as commercial operations. Hear and Know works in the technical space described as “radio solutions for traceability and security.” Founded in 2012, Hear and Know tackled the problem of imprecise location of objects like cargo or persons of interest. GPS is okay for finding one’s way from the Madeleine to the Sorbonne. For many information tasks more precise geo-location coordinates are necessary. Examples include tracking shipments of nuclear material, persons of interest, individual packages within containers, fire and rescue operations, and myriad other use cases. GPS is okay, just not as precise as many assume.

The company’s technology centers on a miniature radio transmitter which fulfills requirements of traceability, geolocation, and secure data transmission via authentication and encryption. The system transmits its ID. The “tag” allows the user to find the asset, the vehicle, the person or the package on which the miniaturized component is attached. The firm’s engineers have designed the device to perform other functions; for example, sensing temperature, pressure, and audio. What makes the hardware interesting is that a Hear and Know device can function as what Lelièvre calls an “effector.” I interpreted the concept as making a Hear and Know device function as an “alarm” or a signaling device for another hardware or software system.
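Hear and Know's actual protocol is, understandably, not public. But the idea of a beacon that transmits its ID plus sensor readings with authentication can be sketched in a few lines. The frame layout, key handling, and field names below are my assumptions, not the company's design; a real device would also encrypt the payload:

```python
import hashlib
import hmac
import json
import time

SECRET_KEY = b"shared-device-key"  # assumption: a pre-shared key per device

def make_beacon(device_id: str, temperature: float, pressure: float) -> bytes:
    """Build an authenticated beacon frame: a JSON payload plus an HMAC tag
    so the receiver can verify the transmitter's identity and integrity."""
    payload = json.dumps(
        {"id": device_id, "t": temperature, "p": pressure, "ts": int(time.time())},
        sort_keys=True,
    ).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return payload + b"|" + tag.encode()

def verify_beacon(frame: bytes) -> bool:
    """Reject any frame whose tag does not match the payload."""
    payload, _, tag = frame.rpartition(b"|")
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected.encode(), tag)

frame = make_beacon("HK-0042", temperature=21.5, pressure=1013.2)
print(verify_beacon(frame))  # True for an untampered frame
```

The point of the sketch is the design choice: authentication lives in the frame itself, so any receiving system, including a third-party "effector" consumer, can trust the ID without a round trip.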


In my talk with Lelièvre we did not discuss military applications of the company’s technology. During my flight from Paris to Kentucky, I thought about the value of embedding Lelièvre’s devices into weapon systems. If those weapon systems find themselves “out of bounds,” the devices can activate a disabling mechanism of some type. A smart weapon that becomes stupid without the intervention of a human struck me as an application worth moving to a prototype.

Lelièvre described a use case in which Hear and Know’s radios are deployed for a person of interest. The locations and other details flow into the Hear and Know data center and allow an investigator to formulate a statement of fact along the lines:

John Doe was on MM/DD/2016 at HOUR:MINUTE at the address LATITUDE/LONGITUDE.
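Rendering a location fix as a statement of that kind is a trivial formatting exercise once the data flows in. A minimal sketch, in which only the template comes from the example above and the function itself is my own illustration:

```python
from datetime import datetime

def statement_of_fact(name: str, when: datetime, lat: float, lon: float) -> str:
    """Render a location fix as an investigator-style statement, following
    the MM/DD/YYYY, HOUR:MINUTE, LATITUDE/LONGITUDE template quoted above."""
    return (f"{name} was on {when:%m/%d/%Y} at {when:%H:%M} "
            f"at the address {lat:.5f}/{lon:.5f}.")

print(statement_of_fact("John Doe", datetime(2016, 5, 18, 14, 30),
                        48.87004, 2.33235))
# → John Doe was on 05/18/2016 at 14:30 at the address 48.87004/2.33235.
```

The hard part, of course, is not the sentence; it is getting location fixes precise enough that the statement holds up.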

Another application is the use of the Hear and Know devices to monitor individuals with a medical condition; for example, tracking a person with Lyme disease allows the family to know that family member’s location and support them if help is needed.

These data can be displayed on a map in the same way Geofeedia presents tweets or Palantir shows the location of improvised explosive devices. The difference is that Hear and Know provides:

  • Nearly undetectable radio form factors
  • Adjustable transmission frequencies
  • Multi-month operational autonomy
  • Email and SMS alerts about location of tracked object or person.

Hear and Know has remarkable technology. At this time, the company is best known in Europe. Its customers include:

  • Atos
  • BPIFrance
  • Esiglec
  • Mov’eo
  • Thales

US law enforcement, intelligence, and commercial enterprises are wrestling with pinpoint tracking in real time. My view is that the Hear and Know technology might lead to some hefty revenue opportunities. The company has begun to probe the US market. Hear and Know received a silver medal certificate for innovation at the January 2016 Consumer Electronics Show in Las Vegas.

Hear and Know will be participating in the Pioneers festival in Vienna May 23 to 25, 2016 and in the Connected Conference in Paris, May 25 to 27, 2016. This summer, their next step will be looking for partners and funding in the US.

To contact Hear and Know, write sales@hearandknow.eu.

Stephen E Arnold, May 18, 2016

Facebook and Humans: Reality Is Not Marketing

May 16, 2016

I read “Facebook News Selection Is in Hands of Editors Not Algorithms, Documents Show.” The main point of the story is that Facebook uses humans to do work. The idea is that algorithms do not seem to be a big part of picking out what’s important.

The write up comes from a “real” journalism outfit. The article points out:

The boilerplate about its [Facebook’s] news operations provided to customers by the company suggests that much of its news gathering is determined by machines: “The topics you see are based on a number of factors including engagement, timeliness, Pages you’ve liked and your location,” says a page devoted to the question “How does Facebook determine what topics are trending?”

After reading this, I thought of Google’s poetry created by its artificial intelligence system. Here’s the line which came to mind:

I started to cry. (Source: Quartz)

I vibrate with the annoyance bubbling under the surface of the newspaper article. Imagine. Facebook has great artificial intelligence. Facebook uses smart software. Facebook open sources its systems and methods. The company says it is at the cutting edge of replacing humans with objective procedures.

The article’s belief in baloney is fried and served cold on stale bread. Facebook uses humans. The folks at real journalism outfits may want to work through articles like “Different Loci of Semantic Interference in Picture Naming vs. Word-Picture Matching Tasks” to get a sense of why smart systems go wandering.

So what’s new? Palantir Technologies uses humans to index content. Without that human input, the “smart” software does some useful work, but humans are part of the work flow process.

Other companies use humans too. But the marketing collateral and the fizzy presentations at fancy conferences paint a picture of a world in which cognitive, artificially intelligent, smart systems do the work that subject matter experts used to do. Humans, like indexers and editors, are no longer needed.

Now reality pokes its rose-tinted fingertips into the real world.

Let me be clear. One reason I am not happy with the verbiage generated about smart software is one simple fact.

Most of the smart software systems require humans to fiddle at the beginning when a system is set up, while the system operates to deal with exceptions, and after an output is produced to figure out what’s what. In short, smart software is not that smart yet.

There are many reasons but the primary one is that the math and procedures underpinning many of the systems with which I am familiar are immature. Smart software works well when certain caveats are accepted. For example, the vaunted Watson must be trained. Watson, therefore, is not that much different from the training Autonomy baked into its IDOL system in the mid 1990s. Palantir uses humans for one simple reason. Figuring out what’s important to a team under fire with software works much better if the humans with skin in the game provide indexing terms and identify important points like local names for stretches of highway where bombs can be placed without too much hassle. Dig into any of the search and content processing systems and you find expenditures for human work. Companies licensing smart systems which index automatically face significant budget overruns, operational problems because of lousy outputs, and piles of exceptions to either ignore or deal with. The result is that the smoke and mirrors of marketers speaking to people who want a silver bullet are not exactly able to perform like the carefully crafted demonstrations. IBM i2 Analyst’s Notebook requires humans. Fast Search (now an earlobe in SharePoint) requires humans. Coveo’s system requires humans. Attivio’s system requires humans. OpenText’s suite of search and content processing requires humans. Even Maxxcat benefits from informed set up and deployment. Out of the box, dtSearch can index, but one needs to know how to set it up and make it work in a specific Microsoft environment. Every search and content processing system that asserts that it is automatic is spackling flawed wallboard.

For years, I have given a lecture about the essential sameness of search and content processing systems. These systems use the same well known and widely taught mathematical procedures. The great breakthroughs at SRCH2 and similar firms amount to optimization of certain operations. But the whiziest system is pretty much like other systems. As a result, these systems perform in a similar manner. These systems require humans to create term lists, look up tables of aliases for persons of interest, hand craft taxonomies to represent the chunk of reality the system is supposed to know about, and other “libraries” and “knowledgebases.”
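The "libraries" mentioned above are mundane artifacts. A hand-built alias table for persons of interest, for instance, is often nothing more exotic than a lookup from raw mentions to canonical names. The entries below are invented for illustration; real tables run to thousands of human-curated rows:

```python
# A toy alias table of the kind analysts hand-build for persons of interest.
# All names and aliases here are invented for illustration.
ALIASES = {
    "j. smith": "John Smith",
    "johnny s.": "John Smith",
    "m. garcia": "Maria Garcia",
}

def canonicalize(mention: str) -> str:
    """Resolve a raw mention to its canonical entity, falling back to the
    cleaned mention when no human-curated alias exists."""
    return ALIASES.get(mention.strip().lower(), mention.strip())

print(canonicalize("  Johnny S. "))  # → John Smith (via the hand-built table)
print(canonicalize("A. Nobody"))    # → A. Nobody (no alias; passes through)
```

The code is trivial; the cost is not. Every row in the real table represents human judgment, and that labor is what the marketing collateral leaves out.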

The fact that Watson is a source of amusement to me is precisely because the human effort required to make a smart system work is never converted to cost and time statements. People assume Watson won Jeopardy because it was smart. People assume Google knows what ads to present because Google’s software is so darned smart. People assume Facebook mines its data to select news for an individual. Sure, there is automation of certain processes, but humans are needed. Omit the human and you get the crazy Microsoft Tay system which humans taught to be crazier than some US politicians.

For decades I have reminded those who listened to my lectures not to confuse what they see in science fiction films with reality. Progress in smart software is evident. But the progress is very slow, hampered by the computational limits of today’s hardware and infrastructure. Just like real time, the concept is easy to say but quite expensive and difficult to implement in a meaningful way. There’s a reason millisecond access to trading data costs so much that only certain financial operations can afford the bill. Smart software is the same.

How about less outrage from those covering smart software and more critical thinking about what’s required to get a system to produce a useful output? In short, more info and less puffery, more critical thinking and less sawdust. Maybe I imagined it, but both the Google and Tesla self-driving vehicles have crashed, right? Humans are essential because smart software is not as smart as those who believe in unicorns assume. Demos, like TV game shows, require pre and post production, gentle reader.

What happens when humans are involved? Isn’t bias part of the territory?

Stephen E Arnold, May 16, 2016

Watson Does Cyber Security

May 10, 2016

I heard a rumor that Palantir Technologies has turned down the volume on its cybersecurity initiative. I was interested to learn that IBM is jumping into this niche following the lead of its four star general Thomas “Weakly” Watson.

According to “IBM’s Watson Is Going to Cybersecurity School,” General Watson “announced a new year-long research project through which it will collaborate with eight universities to help train its Watson artificial-intelligence system to tackle cybercrime.”

A number of capable outfits are attacking this market sector. Instead of buying a high octane outfit, I learned:

This fall, it will begin working with students at universities including California State Polytechnic University at Pomona, Penn State, MIT, New York University and the University of Maryland at Baltimore County along with Canada’s universities of New Brunswick, Ottawa and Waterloo.

Never give up. Forward, march.

Stephen E Arnold, May 10, 2016

Artificial Intelligence Spreading to More Industries

May 10, 2016

According to MIT Technology Review, it has finally happened. No longer is artificial intelligence the purview of data wonks alone—“AI Hits the Mainstream,” they declare. Targeted AI software is now being created for fields from insurance to manufacturing to health care. Reporter Nanette Byrnes is curious to see how commercialization will affect artificial intelligence, as well as how this technology will change different industries.

What about the current state of the AI field? Byrnes writes:

“Today the industry selling AI software and services remains a small one. Dave Schubmehl, research director at IDC, calculates that sales for all companies selling cognitive software platforms —excluding companies like Google and Facebook, which do research for their own use—added up to $1 billion last year. He predicts that by 2020 that number will exceed $10 billion. Other than a few large players like IBM and Palantir Technologies, AI remains a market of startups: 2,600 companies, by Bloomberg’s count. That’s because despite rapid progress in the technologies collectively known as artificial intelligence—pattern recognition, natural language processing, image recognition, and hypothesis generation, among others—there still remains a long way to go.”

The article examines ways some companies are already using artificial intelligence. For example, insurance and financial firm USAA is investigating its use to prevent identity theft, while GE is now using it to detect damage to its airplanes’ engine blades. Byrnes also points to MyFitnessPal, Under Armour’s extremely successful diet and exercise tracking app. Through a deal with IBM, Under Armour is blending data from that site with outside research to help better target potential consumers.

The article wraps up by reassuring us that, despite science fiction assertions to the contrary, machine learning will always require human guidance. If you doubt, consider recent events—Google’s self-driving car’s errant lane change and Microsoft’s racist chatbot. It is clear the kids still need us, at least for now.

 

Cynthia Murrell, May 10, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Semantics Made Easier

May 9, 2016

For fans of semantic technology, Ontotext has a late spring delight. The semantic platform vendor has released GraphDB 7. I read “Ontotext Releases New Version of Semantic Graph Database.” According to the announcement, setup and data access are easier. I learned:

The new release offers new tools to access and explore data, eliminating the need to know everything about the dataset before start working with it. GraphDB 7 enables users to navigate their way through third-party and any other dataset regardless of data volumes, which makes it a powerful Big Data analytics tool. Ver.7 offers visual exploration of the loaded data schema – ontology, interactive query builder for better entity retrieval, and full support for RDF 1.1 allowing smooth import of a huge number of public Open Data as well as proprietary Linked Datasets.
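For readers unfamiliar with the RDF model the announcement assumes, the underlying idea is simply a set of subject-predicate-object triples queried by pattern. A toy in-memory sketch of that idea follows; this is not GraphDB's API (GraphDB is queried over SPARQL endpoints), and the triples are invented:

```python
# A toy in-memory triple store illustrating the RDF model GraphDB builds on.
# The triples below are invented for illustration.
triples = {
    ("ex:ontotext", "rdf:type", "ex:Company"),
    ("ex:ontotext", "ex:product", "GraphDB 7"),
    ("ex:graphdb7", "ex:supports", "RDF 1.1"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard,
    mirroring a basic SPARQL triple pattern."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Roughly: SELECT ?o WHERE { ex:ontotext ex:product ?o }
print([o for _, _, o in match(s="ex:ontotext", p="ex:product")])
# → ['GraphDB 7']
```

The appeal of the model is that "navigating third-party datasets" reduces to pattern matching over triples, with no fixed schema required up front; the cost, as noted below, shows up earlier, in converting and normalizing content into triples at all.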

If you want to have a Palantir-type system, check out Ontotext. The company is confident that semantic technology will yield benefits, a claim made by other semantic technology vendors. But the complexity challenges associated with conversion and normalization of content are likely to be a pebble in the semantic sneaker.

Stephen E Arnold, May 9, 2016
