SLI Search: Loss Narrows for $35 Million Business
September 14, 2016
SLI Systems offers an eCommerce search system. If you followed the history of NBC’s search efforts, you may know that SLI Systems has some DNA from Snap Search. The company is an interesting one. It competes with EasyAsk, another eCommerce search vendor.
SLI released its financial results in a news release titled “SLI Systems Announces Financial Results for the Year to 30 June 2016.” (Some news releases have the ability to disappear or become a pay to play feature. The release was online and free as of September 6, 2016.)
The write up confirmed what most stakeholders in search and content processing systems may avoid thinking about: Generating revenue in today’s economic climate is difficult.
SLI Systems is a $35 million company. The firm lost several big accounts for a range of reasons. The good news is that instead of losing $7 million in FY2015, SLI reported a before tax loss of $162,000. There are no details about what caused the hefty loss 12 months ago or what the new management team did to reduce the shortfall by almost $7 million. Great management? Magic?
I circled this chunk of management explanation:
SLI Systems Chairman Greg Cross said: “The 2016 financial year has been a period of significant change for the company. Chris Brennan took over as Chief Executive Officer in October 2015 and since then we have recruited three key executives: a new Chief Revenue Officer, a new Chief Marketing Officer and a new Vice President of Customer Success. Drawing on the expertise of these new recruits and the broader management team, SLI has put in place new business processes and organizational structures to lift the performance of the business for the long term.”
He added:
“The company remains in a strong financial position. Although we expect net cash outflows in the coming year as we return to a growth trajectory, we remain confident that we have sufficient cash resources to support the company’s plan. We are looking forward to the remainder of the year with cautious optimism,” Mr. Cross said.
SLI is based in New Zealand. The most recent version of the company’s Web site does not make it easy to locate the company’s address: 78-106 Manchester Street, Christchurch 8011, New Zealand. Phone: 0800 754 797. The company’s office appears to be in the Enterprise Precinct Innovation Center. The firm has an office in San Jose, California. SLI’s office locations are available at this link.
Stephen E Arnold, September 14, 2016
OpenText: Documentum Enters the Canadian Wilderness
September 14, 2016
Documentum is an outfit that some big companies have to use. Other big outfits have hired integrators like IBM to make Documentum the go to system for creating laws and regulations. Other companies looking for a way to keep track of digital information believed the hyperbole about Documentum. Sure, one can get Documentum to “work.” But like other large scale, multipurpose content processing and management systems, considerable expertise, money, and time are often necessary. Documentum is now more than a quarter century young. For giant companies buying late 1980s technology, the job of generating sufficient cash flow is a big one. How is that acquisition of Autonomy going, Hewlett Packard? Oh, right. HP sold Autonomy and has a date in court related to that deal. What about Lexmark and ISYS Search Software? Are those empty offices an indication of rough water? What about IBM and Vivisimo? Oracle and Endeca? Dassault and Exalead? You get the idea. Buy a search vendor and discover that the demand for cash to make the systems hop, skip, and jump is significant. Then there is the pesky problem of open source software. Magento, anyone?
Now OpenText has purchased one of the US Food and Drug Administration’s all time favorite software systems. No doubt that visions of big bucks, juicy renewals, and opportunities to sell hot OpenText properties like BASIS, Fulcrum, and BRS Search are dancing in the heads of the Canadian business wizards.
I learned that OpenText is the proud new owner of Documentum. You can read the details, such as they are, in “OpenText Signs Deal for Dell EMC Division.” I learned that Documentum carried a price tag of $1.62 billion, a little more than what Oracle paid for Endeca and what Microsoft paid for the fascinating and legally confused Fast Search & Transfer content processing systems. OpenText, to its credit, paid roughly one seventh of the amount Hewlett Packard paid for Autonomy.
I learned:
“This acquisition further strengthens OpenText as a leader in enterprise information management, enabling customers to capture their digital future and transform into information-based businesses,” OpenText CEO Mark Barrenechea said in a statement Monday. “We are very excited about the opportunities which ECD and Documentum bring, and I look forward to welcoming our new customers, employees, and partners to OpenText.”
I also noted “Moody’s Places Open Text (OTEX) Ratings on Review for Downgrade.” That write up informed me:
Open Text plans to finance the acquisition with a combination of cash on hand, debt and equity. If the company raises equity to finance a significant portion of the purchase price, the Ba1 CFR will likely be confirmed. In a scenario where the company funds the acquisition with just cash on hand and new debt, the Ba1 CFR could face downward pressure. However, in such case Moody’s would evaluate the company’s ongoing commitment and capacity to de-lever, which could mitigate downward rating pressure. Negative ratings movement related to the CFR, if any, would be limited to one notch.
This is financial double talk for “we are just not that confident that OpenText can make this deal spew revenue growth and hefty, sustainable profits.” But my interpretation is fueled by Kentucky creek water. Your perception may differ. May I suggest you put your life savings into OpenText stock if you see rainbows, unicorns, and tooth fairies in this deal?
I noted this passage:
Open Text has made over $3 billion of acquisitions since 2005 and although the company does not break out results of acquired companies, EBITDA margins have increased to 35% from 17% over this period.
Get out your checkbook. Let the good times roll.
My view from rural Kentucky is less optimistic. Here are the points I noted on my Dollar General notepad as I worked through the articles about this deal:
- Michael Dell was quick to dump Documentum, underscoring the silliness of EMC’s rationale for buying the company in 2003 for about $1.7 billion
- The cost of maintaining the Documentum server and the technologies of the eight acquired companies is likely to be tough to control
- The money needed to keep a 25 year old platform in tip top shape to compete with more youthful alternatives makes me wonder how OpenText will finance innovation
- The open source alternatives, whether for nifty NoSQL methods or clones of more traditional content management systems constructed by programmers with time on their hands, are likely to be a challenge.
To sum up, OpenText is a roll up of overlapping and often competing products and services. I hope the OpenText marketing department is able to sort out when to use which OpenText product. If customers are not confused, that’s good. If the customers are confused, the time to close a deal for a giant, rest home qualified software system is likely to be lengthy.
OpenText is much loved by those in Canada. I recall the affection felt for Blackberry. Stakeholders will be watching OpenText to make sure that it does not mix up raspberries and blackberries. Blackberries, by the way, have “drupelets.” That sounds like Drupal to me.
Stephen E Arnold, September 14, 2016
Is the UK Tolling the App Death Knell for Government Services?
September 14, 2016
The article titled Why Britain Banned Mobile Apps on GovInsider introduces Ben Terrett and the innovative UK Government Digital Service program, the first of its kind in the world. Terrett spearheaded a strict “no apps” policy in favor of websites while emphasizing efficiency, clarity, cost savings, and relevance of the information. This all adds up to creating a simple and streamlined experience for UK citizens. Terrett explains why this approach is superior in an app-crazed world,
Apps are “very expensive to produce, and they’re very very expensive to maintain because you have to keep updating them when there are software changes,” Terrett says. “I would say if you times that by 300, you’re suddenly talking about a huge team of people and a ton of money to maintain that ecosystem”…Sites can adapt to any screen size, work on all devices, and are open to everyone to use regardless of their device.
So what do these websites look like? They are clean, simple, and operated under the assumption that “Google is the homepage.” Terrett measures the success of a given digital service by monitoring how many users complete a transaction, or how many continue to search for additional information, documents, or services. Terrett’s argument against apps is a convincing one, especially based on the issue of cutting expenses. Whether this argument translates into the private sector is another question.
Chelsea Kerwin, September 14, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden Web/Dark Web meet up on September 27, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233599645/
Mobile Data May Help Fight Disease
September 14, 2016
Data from smartphones and other mobile devices may give us a new tool in the fight against communicable diseases. Penn State News reports, “Walking and Talking Behaviors May Help Predict Epidemics and Trends.” A recent study, completed by an impressive roster of academics at several institutions, reveals a strong connection between our movements and our communications. So strong, in fact, that a dataset on one can pretty accurately predict the other. The article cites one participant, researcher Dashun Wang of Penn State:
[Wang] added that because movement and communication are connected, researchers may only need one type of data to make predictions about the other phenomenon. For instance, communication data could reveal information about how people move. …
The equation could better forecast, among other things, how a virus might spread, according to the researchers, who report their findings today (June 6) in the Proceedings of the National Academy of Sciences. In the study, they tested the equation on a simulated epidemic and found that either location or communication datasets could be used to reliably predict the movement of the disease.
Perhaps not as dramatic but still useful, the same process could be used to predict the spread of trends and ideas. The research was performed on three databases full of messages from users in Portugal and another (mysteriously unidentified) country and on four years of Rwandan mobile-phone data. These data sets document who contacted whom, when, and where.
Containing epidemics is a vital cause, and the potential to boost its success is worth celebrating. However, let us take note of who is funding this study: the U.S. Army Research Laboratory, the Office of Naval Research, the Defense Threat Reduction Agency, and the James S. McDonnell Foundation’s program, Studying Complex Systems. Note the first three organizations in the list; it will be interesting to learn what other capabilities derive from this research (once they are declassified, of course).
Cynthia Murrell, September 14, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden Web/Dark Web meet up on September 27, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233599645/
HonkinNews, September 13, 2016 Now Available
September 13, 2016
Interested in having your polynomials probed? The Beyond Search weekly news explains this preventive action. In this week’s program, you will learn about Google’s new enterprise search solution. Palantir is taking legal action against an investor in the company. IBM Watson helps out at the US Open. Catch up on the search, online, and content processing news that makes the enterprise procurement teams squirm. Dive in with Springboard and Pool Party. To view the video, click this link.
Kenny Toth, September 13, 2016
SAP In Memory: Conflicts of Opinion
September 13, 2016
I was surprised by the information presented in “SAP Hana Implementation Pattern Research Yields Contradictory Results.” My goodness, I thought, an online publication actually entertains the idea that a high profile system may not be a cat fully dressed in pajamas.
The SAP Hana system is a database. The difference between Hana and the dozens of other allegedly next generation data management solutions is its “in memory, columnar database platform.” If you are not hip to the lingo of the database administrators who clutch many organizations by the throat, an in memory approach is faster than trucking back to a storage device. Think back to the 1990s and Eric Brewer or the teens who rolled out Pinpoint.
The columnar angle is that data is presented in stacks, with each item written on a note card. The mapping of the data is different from a row type system. The primary key in a columnar structure is the data itself, which maps back to the row identification.
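Here is a minimal sketch of the difference, in plain Python rather than anything Hana specific; the tiny table and the query are invented for illustration:

```python
# Toy illustration of row versus columnar layout. This is my own sketch,
# not SAP Hana code; Hana's actual storage engine is far more involved.

rows = [
    {"id": 1, "customer": "Acme",  "region": "US", "amount": 100},
    {"id": 2, "customer": "Blore", "region": "EU", "amount": 250},
    {"id": 3, "customer": "Acme",  "region": "US", "amount": 75},
]

# Row store: each record is kept together, handy for fetching whole records.
row_store = {r["id"]: r for r in rows}

# Column store: each column is kept together; the position in each list
# acts as the row identifier that ties the values back together.
column_store = {
    "id":       [r["id"] for r in rows],
    "customer": [r["customer"] for r in rows],
    "region":   [r["region"] for r in rows],
    "amount":   [r["amount"] for r in rows],
}

# An analytical query ("total amount for US orders") touches only two columns.
total_us = sum(
    amt for amt, reg in zip(column_store["amount"], column_store["region"])
    if reg == "US"
)
print(total_us)  # 175
```

The point is that an aggregate over one column never has to drag the other columns along, which is where the speed claims for columnar, in memory engines come from.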
The aforementioned article points to a mid tier consulting firm report. That report was prepared by an outfit called Nucleus Research. Nucleus, according to the article, “revealed that 60 percent of SAP reference customers – mostly in the US – would not buy SAP technology again.” I understand that SAP engenders some excitement among its customers, but a mid tier consulting firm seems to be demonstrating considerable bravery if the data are accurate. Many mid tier consulting firms sand the rough edges off their reports.
The article then jumps to a report paid for by an SAP reseller, which obviously has a dog in the Nucleus fight. Another mid tier research outfit called Coleman Parkes was hired to do another study. The research focused on 250 Hana license holders.
The results are interesting. I learned from the write up:
When asked what claims for Hana were credible, 92% of respondents said it reduced IT infrastructure costs, a further 87% stated it saved business costs. Some 98% of Hana projects came in on-budget, and 65% yet to roll out were confident of hitting budget.
Yep, happy campers who are using the system for online transactional processing and online analytical processing. No at home chefs tucking away their favorite recipes in Hana, I surmise.
However, the report allegedly determined what I have known for more than a decade:
SAP technology is often deemed too complex, and its CEO Bill McDermott has been waging a public war against this complexity for the past few years, using the mantra Run Simple.
The rebuttal study identified another plus for Hana:
“We were surprised how satisfied the Hana license holders were. SAP has done a good job in making sure these projects work, and the rate at which it has got Hana out is amazing for such a large organization,” said Centiq director of technology and services Robin Webster. “We had heard a lot about Hana as shelfware, so we were surprised at the number saying they were live.”
From our Hana free environment in rural Kentucky, we think:
- Mid tier consulting firms often output contradictory findings when reviewing products or conducting research. If there is bias in algorithms, imagine what might lurk in the research team members’ approaches
- High profile outfits like SAP can convince some of the folks with dogs in the fight to get involved in proving that good things come to those who commission more research
- Open source data management systems are plentiful. Big outfits like Hewlett Packard, IBM, and Oracle find themselves trying to generate the type of revenue associated with proprietary, closed data management products at a time when fresh faced computer science graduates just love free in memory solutions like MemSQL.
SAP mounted an open source initiative which I learned about in “SAP Embraces Open Source Sort Of.” But the real message for me is that one can get mid tier research firms to do reports. Then one can pick the one that best presents a happy face to potential licensees.
Here in Harrod’s Creek, the high tech crowd tests software before writing checks. No consultants required.
Stephen E Arnold, September 13, 2016
True or False: Google Fakes Results for Social Engineering
September 13, 2016
Here in Harrod’s Creek, we love the Alphabet Google thing. When we read anti Google articles, we are baffled. Why don’t these articles love and respect the GOOG as we do? A case in point is “How Google’s Search Engines Use Faked Results for Social Engineering.” The loaded words “faked results” and “social engineering” put us on our guard.
What is the angle the write up pursues? Let’s look.
I highlighted this passage as a way to get my intellectual toe in the murky water:
Google published an “overview” of how SEO works, but in a nutshell, Google searches for the freshest, most authoritative, easiest-to-display (desktop/laptop and mobile) content to serve its search engine users. It crawls, caches (grabs) content, calculates the speed of download, looks at textual content, counts words to find relevance, and compares how it looks on different sized devices. It not only analyzes what other sites link to it, but counts the number of these links and then determines their quality, meaning the degree to which the links in those sites are considered authoritative. Further, there are algorithms in place that block the listing of “spammy” sites, although, spam would not be relevant here. And recently, they have claimed to boost sites using HTTPS to promote security and privacy (fox henhouse?).
I am not sure about the “fox hen house” reference because fox is a popular burgoo addition. As a result, the critters are few and far between. Too bad. They are tasty, and their tails make nifty additions to cold weather parkas.
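Setting the fox aside, the quoted overview boils down to blending text relevance with link derived authority. A toy sketch of that blend follows; it is emphatically not Google’s algorithm, and the pages, counts, and weights are invented for illustration:

```python
# Toy ranking sketch only: a crude stand-in for the recipe described in the
# quoted overview (term matches plus link based authority). Google's real
# signals are not public; nothing below is theirs.

from collections import Counter

def text_score(query, page_text):
    """Count how often the query terms appear in the page text."""
    words = Counter(page_text.lower().split())
    return sum(words[term] for term in query.lower().split())

def rank(query, pages):
    """pages: list of (url, text, inbound_link_count). Best guess first."""
    scored = []
    for url, text, links in pages:
        # Blend term matches with a rough authority proxy from inbound links.
        scored.append((text_score(query, text) * (1 + links), url))
    return [url for score, url in sorted(scored, reverse=True)]

pages = [
    ("a.example.org", "enterprise search explained; enterprise search compared", 40),
    ("b.example.org", "one passing mention of enterprise search", 2),
]
print(rank("enterprise search", pages))  # ['a.example.org', 'b.example.org']
```

The point of the toy: a page that is both on topic and well linked floats to the top, which is roughly what the quoted overview asserts.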
The author of the write up is not happy with how Google responds to a query for “Jihad.” I learned:
Google’s search results give pride of place to IslamicSupremeCouncil.org. The problem, according to the write up, is that this site is not a big hitter in the Jihad content space.
The article points out that Google does not return the search results the person running the test queries expected. The article points out:
When someone in the US, perhaps wanting to educate themselves on the subject, searches for “Jihad” and sees the Islamic Supreme Council as the top-ranked site, the perception is that this is the global, unbiased and authoritative view. If they click on that first, seemingly most popular link, their perception of Jihad will be skewed by the beliefs and doctrine of this peaceful group of people. These people who merely dabble on the edge of Islamic doctrine. These people who are themselves repeatedly targeted for their beliefs that are contrary to those of the majority of Muslims. These people who do not even come close to being any sort of credible or realistic representation of the larger and more prevalent subscribers (nay soldiers) of the “Lesser Jihad” (again, the violent kind).
My thought is that the results I get from any ad supported, publicly accessible search system are rarely what I expect. The more I know about a particular subject—how legacy search system marketing distorts what the systems can actually do—the more disappointed I am with the search results.
I don’t think Google is intentionally distorting search results. Certain topics just don’t match up to the Google algorithms. Google is pretty good at sports, pizza, and the Housewives of Beverly Hills. Google is not particularly good with fine grained distinctions in certain topic spaces.
If the information presented by, for instance, the Railroad Retirement Board is not searched, the Google system does its best to find a way to sell an ad against a topic or word. In short, Google does better with certain popular subjects which generate ad revenue.
Information about legacy enterprise search systems like STAIRS III is not going to be easy to find. Nailing down the names of the programmers in Germany who worked on the system and how the STAIRS III system influenced BRS Search is a tough slog with the really keen Google system.
If I attribute Google’s indifference to information about STAIRS III to a master scheme put in place by Messrs. Brin and Page, I would be giving them a heck of a lot of credit for micro managing how content is indexed.
The social engineering angle is more difficult for me to understand. I don’t think Google is biased against mainframe search systems which are 50 years old. The content, the traffic, and the ad focus pretty much guarantee that STAIRS III is presented in a good enough way.
The problem, therefore, is that Google’s whiz kid technology is increasingly good enough. That means average or maybe a D plus. The yardstick is neither precision nor recall. At Google, revenue counts.
Baidu, Bing, Silobreaker, Qwant, and Yandex, among other search systems, have similar challenges. But each system is tending to the “good enough” norm. Presenting any subject in a way which makes a subject matter expert happy is not what these systems are tuned to do.
Here in Harrod’s Creek, we recognize that multiple queries across multiple systems are a good first step in research. Then there is the task of identifying individuals with particular expertise and trying to speak with them or at least read what they have written. Finally, there is the slog through the dead tree world.
Expecting Google or any free search engine to perform sophisticated knowledge centric research is okay. We prefer the old fashioned approach to research. That’s why Beyond Search documents some of the more interesting approaches revealed in the world of online analysis.
I like the notion of social engineering, particularly the Augmentext approach. But Google is more interested in money and itself than many search topics which are not represented in a way which I would like. Does Google hate me? Nah, Google doesn’t know I exist. Does Google discriminate against STAIRS III? Nah, of Google’s 65,000 employees, probably fewer than 50 know what STAIRS III is. Do Googlers understand revenue? Yep, pretty much.
Stephen E Arnold, September 13, 2016
Toshiba Amps up Vector Indexing and Overall Data Matching Technology
September 13, 2016
The article on MyNewsDesk titled Toshiba’s Ultra-Fast Data Matching Technology is 50 Times Faster than its Predecessors relates the bold claims swirling around Toshiba and its Vector Indexing Technology. By skipping the step of computing the distance between vectors, Toshiba has slashed the time it takes to identify matching vectors (or so it claims). The article states,
Toshiba initially intends to apply the technology in three areas: pattern mining, media recognition and big data analysis. For example, pattern mining would allow a particular person to be identified almost instantly among a large set of images taken by surveillance cameras, while media recognition could be used to protect soft targets, such as airports and railway stations, by automatically identifying persons wanted by the authorities.
In sum, Toshiba technology is able to quickly and accurately recognize faces in the crowd. But the specifics are much more interesting. Current technology takes around 20 seconds to identify an individual out of 10 million, and Toshiba can do it in under a second. The precision rates that Toshiba reports are also outstanding at 98%. The world of Minority Report, where ads recognize and direct themselves to random individuals, seems to be increasingly within reach. Perhaps more importantly, this technology should be of dire importance to the criminal and perceived criminal populations of the world.
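How does one find a match without measuring the distance to every stored vector? Toshiba does not spell out its method, but a common trick in this field is to bucket vectors coarsely up front and scan only a few buckets at query time. Below is a rough sketch of that generic idea; it is not Toshiba’s Vector Indexing Technology, and the sizes and data are invented:

```python
# Generic inverted-file sketch: avoid a distance computation against every
# stored vector by pre-assigning vectors to coarse buckets. Not Toshiba's
# proprietary method, whose internals are not described in the article.

import numpy as np

def sq_dists(a, b):
    """Pairwise squared Euclidean distances between rows of a and rows of b."""
    return (a**2).sum(1)[:, None] - 2 * a @ b.T + (b**2).sum(1)[None, :]

rng = np.random.default_rng(0)
database = rng.normal(size=(100_000, 64)).astype(np.float32)  # stored feature vectors
centroids = database[rng.choice(len(database), 256, replace=False)]

# Offline step: bucket every database vector under its nearest coarse centroid.
buckets = sq_dists(database, centroids).argmin(axis=1)

def search(query, n_probe=4):
    """Return the index of the closest stored vector, scanning only n_probe buckets."""
    nearest_buckets = sq_dists(query[None, :], centroids)[0].argsort()[:n_probe]
    candidates = np.flatnonzero(np.isin(buckets, nearest_buckets))
    best = sq_dists(query[None, :], database[candidates])[0].argmin()
    return candidates[best]

query = database[42] + 0.01 * rng.normal(size=64).astype(np.float32)
print(search(query))  # usually 42, after touching only a few percent of the vectors
```

Precision depends on how many buckets one probes; scanning more trades speed for accuracy, which is presumably the dial any such system is turning.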
Chelsea Kerwin, September 13, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden Web/Dark Web meet up on September 27, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233599645/
Elastic Links Search and Social Through Graph Capabilities
September 13, 2016
The article titled Confused About Relationships? Elasticsearch Gets Graphic on The Register communicates the latest offering for Elasticsearch, the open-source search server based on Apache Lucene. Graph capabilities are an exciting new twist on search that enables users to map out relationships through the search engine and the Kibana data visualization plug-in. The article explains,
By fusing graph with search, Elastic hopes to combine the power of social with that earlier great online revolution, the revolution that gave us Google: search. Graph in Elasticsearch establishes relevance by establishing the significance of each relationship versus the global average to return important results. That’s different to what Elastic called “traditional” relationship mapping, which is based on a count of the frequency of a given relationship.
Elastic sees potential for its Graph capabilities in behavioral analysis, particularly in areas such as drug discovery, fraud detection, and customized medicine and recommendations. When it comes to identifying business opportunities, graph databases have already proven their value. Discovering connections and trimming degrees of separation are of vital importance in social media. Social networks like Twitter have been using them since the beginning of NoSQL. Indeed, Facebook is a customer of Elastic, the company behind Elasticsearch, founded in 2012. Other users of Elasticsearch include Netflix, StumbleUpon, and Mozilla.
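The “significance versus the global average” idea in the quote can be shown with a back of the envelope calculation. The sketch below is a toy example with invented counts, not Elastic’s scoring code: a term that is common everywhere ranks high on raw frequency but low on significance, while a term concentrated in the matching documents does the opposite.

```python
# Toy illustration of significance scoring versus raw co-occurrence counts.
# The counts are invented; this is not Elastic's Graph implementation.

FG_TOTAL = 1_000        # documents matching the seed query (foreground)
BG_TOTAL = 1_000_000    # documents in the whole index (background)

def significance(fg_count, bg_count):
    """Ratio of foreground rate to background rate: values above 1 mean the
    term is unusually concentrated in the matching documents."""
    return (fg_count / FG_TOTAL) / (bg_count / BG_TOTAL)

# term: (count in matching docs, count in whole index)
counts = {
    "the":    (900, 950_000),  # common everywhere: high frequency, no signal
    "fraud":  (40,  2_000),    # rare overall but concentrated here
    "refund": (120, 30_000),
}

by_frequency    = sorted(counts, key=lambda t: counts[t][0], reverse=True)
by_significance = sorted(counts, key=lambda t: significance(*counts[t]), reverse=True)

print(by_frequency)     # ['the', 'refund', 'fraud'] -- raw counts surface noise
print(by_significance)  # ['fraud', 'refund', 'the'] -- significance surfaces signal
```

That, in rough strokes, is why a significance based graph avoids drowning in super connected but uninformative nodes.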
Chelsea Kerwin, September 13, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden Web/Dark Web meet up on September 27, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233599645/
Autonomy Back Home in Merrie Olde England
September 12, 2016
I read “Hewlett Packard Offloads Last Autonomy Assets in Software Deal.” I think that Autonomy is now going back home. Blood pudding, the derbies, and Indian take aways—yes, the verdant isle.
The union of Hewlett Packard (once an ink outfit) and the love child of Bayesian and Laplacian methods is burst asunder. HPE (the kissin’ cousin of the ink outfit) fabricated a deal only lawyers, MBAs, and accountants can conjure.
There is an $8 billion deal, cash to HPE, and a fresh swath of lush pasture for Micro Focus to cultivate.
I learned:
“Autonomy doesn’t really exist as an entity, just the products,” said Kevin Loosemore, executive chairman of Micro Focus. Loosemore said the Newbury-based business conducted due diligence across all of the products included in the deal, with no different approach taken for the Autonomy assets. No legal liabilities from Autonomy will be transferred to Micro Focus.
Integration is what Micro Focus does. Autonomy embodied in products was once a goal for some senior Autonomy executives. The golden sun is rising over the mid 1990s technology.
We wish Micro Focus well. We wish HPE well as it moves toward the resolution of its claims against Autonomy for assorted misdeeds.
Without search, HPE ceases to interest me. While HPE was involved in search, there was some excitement generated, but that is winding down and, for some I imagine, has long since vaporized.
I will have fond memories of HP blaming Autonomy for HP’s decision to buy Autonomy. Amazing. One of the great comedic moments in search and fading technology management.
Autonomy is dead. Long live Autonomy. Bayes lasted 60 years; Autonomy may have some legs even if embodied in other products. IDOL hands are the devil’s playthings, I think. PS. I will miss the chipper emails from BM.com. Substantive stuff.
Stephen E Arnold, September 12, 2016