June 23, 2016
Remember when Nest Labs had all the hype a few years ago? An article from BGR reminds us how the tides have turned: Even Google views its Nest acquisition as a disappointment. It was in 2014 that Google purchased Nest Labs for $3.2 billion. The company’s newly launched products, a Wi-Fi-connected smoke alarm and thermostat, seemed at the time to position it for greater and greater success. The article offers a look at the current state:
“Two and a half years later and Nest is reportedly in shambles. Recently, there have been no shortage of reports suggesting that Nest CEO Tony Fadell is something of a tyrannical boss cut from the same cloth as Steve Jobs (at his worst). Additionally, the higher-ups at Google are reportedly disappointed that Nest hasn’t been able to churn out more hardware. Piling it on, Re/Code recently published a report indicating that Nest generated $340 million in revenue last year, a figure that Google found disappointing given how much it spent to acquire the company. And looking ahead, particulars from Google’s initial buyout deal with Nest suggest that the pressure for Nest to ramp up sales will only increase.”
Undoubtedly there are challenges when it comes to expectations about acquired companies’ performance. But when it comes to the nitty gritty details of the work happening in those acquisitions, aren’t managers supposed to solve problems, not simply agree that a problem exists? How the success of “internet of things” companies will pan out seems to be predicated on their inherent interconnectedness, and that seems to apply at both the product and the business level.
Megan Feil, June 23, 2016
June 23, 2016
Through its Press Room site, ZyLAB announces, “Zylab Introduces eDiscovery as a Service.” Billed as a cost-saving alternative to in-house solutions, the new platform allows users to select and pay for only the services they need through a monthly subscription. The press release tells us:
“ZyLAB today announces that its eDiscovery solutions are now also delivered via the Internet in a software-as-a-service (SaaS) model in EMEA and AP via a managed service provider model. ZyLAB’s eDiscovery as a Service is introduced as the cost-effective alternative for organizations that do not have the time or IT resources to bring an eDiscovery solution in house. …
“With ZyLAB’s eDiscovery as a Service every type of company, in every industry can now easily scope the level of system they require. ZyLAB’s services span the entire Electronic Discovery Reference Model (EDRM) so a company can select the precise services that meet the needs of their current matter. The Service Level Agreement (SLA) will outline those selections and guarantee the availability of the data, ZyLAB’s software, and ongoing maintenance from ZyLAB’s Professional Services consultants.”
We are assured ZyLAB’s SaaS solutions are of the same caliber as its on-premises offerings. This approach can save a lot of time and hassle, especially for companies without a dedicated IT department. The write-up notes there are no long-term contracts or volume constraints involved, and, of course, no new hardware to buy. If a company is willing to trust its data to a third party’s security measures, this could be a cost-effective way to manage eDiscovery.
Of course, if you were to trust anyone with your sensitive data, ZyLAB’s record makes it a good choice. The company has been supplying eDiscovery and Information Governance technology to prominent organizations for over three decades now. Large corporations, government organizations, regulatory agencies, and law firms around the world rely on its eDiscovery platform. The company was founded in 1983, with the release of the first full-text retrieval software for the PC. Its eDiscovery/Information Management platform was released in 2010.
Cynthia Murrell, June 23, 2016
June 22, 2016
While Dark Web users understand the perks of anonymity, especially those involved in illicit activity, consistently maintaining that anonymity appears to be challenging. Geek.com published an article that showcases how one drug dealer revealed his identity while trying to promote his brand: Drug dealer busted after trying to trademark his dark web username. David Ryan Burchard of Merced, California reportedly made $1.25 million selling marijuana and cocaine on the Dark Web before he trademarked the username he used to sell drugs, “caliconnect.” The article summarizes,
“He started out on Silk Road and moved on to other shady marketplaces in the wake of its highly-publicized shutdown. Burchard wound up on Homeland Security’s list of top sellers, though they were having trouble establishing a rock-solid connection between him and his online persona. They knew that Burchard was accumulating a large Bitcoin stash and that there didn’t appear to be a legitimate source. Then, finally, investigators got the break they were looking for. It seems that Burchard decided that his personal brand was worth protecting, and he filed paperwork to trademark “caliconnect.””
Whether this points to the human proclivity for self-promotion or the egoism of one person in a specific situation, everyone covering the story seems to be framing the move as a preventable mistake on Burchard’s part. Look no further than the title of a recent Motherboard article: Pro-Tip: If You’re a Suspected Dark Web Drug Dealer, Don’t Trademark Your #Brand. How promotion and marketing on the Dark Web unfold will be an interesting area to watch.
Megan Feil, June 22, 2016
June 20, 2016
The article titled Self-Service Data Prep is the Next Big Thing for BI on Datanami digs into the quickly growing data preparation industry by reviewing a Dresner Advisory Services study. The article lists the study’s major insights and paints a vivid picture of the current state of the field: most organizations perform end-user data preparation, but only a small percentage (12%) consider themselves proficient at it. The article states,
“Data preparation is often challenging, with many organizations lacking the technical resources to devote to comprehensive data preparation. Choosing the right self-service data preparation software is an important step…Usability features, such as the ability to build/execute data transformation scripts without requiring technical expertise or programming skills, were considered “critical” or “very important” features by over 60% of respondents. As big data becomes decentralized and integrated into multiple facets of an organization, users of all abilities need to be able to wrangle data themselves.”
90% of respondents agreed on the importance of two key features: the capacity to aggregate and group data, and a straightforward interface for imposing structure on raw data. Trifacta earned the top ranking among just under 30 vendors for the second year in a row. The article concludes by suggesting that many users already recognize that data preparation is not a standalone activity; data prep software must be integrated with other resources to succeed.
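As a rough illustration of the two capabilities respondents ranked highest, here is a minimal plain-Python sketch of imposing structure on raw records and then grouping and aggregating them. The field names and figures are hypothetical, and real self-service tools wrap these steps in a visual interface rather than code.

```python
from collections import defaultdict

# Hypothetical "raw" records: inconsistent category labels and a
# missing value, typical of data before any preparation step.
raw = [
    {"region": "East", "sales": 100.0},
    {"region": "east", "sales": 250.0},
    {"region": "West", "sales": None},
    {"region": "WEST", "sales": 75.0},
    {"region": "East", "sales": 125.0},
]

# Impose structure: normalize the labels and drop unusable rows.
clean = [
    {"region": r["region"].title(), "sales": r["sales"]}
    for r in raw
    if r["sales"] is not None
]

# Group and aggregate: total sales per region.
totals = defaultdict(float)
for r in clean:
    totals[r["region"]] += r["sales"]

print(dict(totals))  # {'East': 475.0, 'West': 75.0}
```

The point of the survey’s “usability” finding is that this transform-then-aggregate logic should be buildable without writing such a script at all.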
Chelsea Kerwin, June 20, 2016
June 17, 2016
Digital assistants are smarter than ever. I remember when PDAs were the wave of the future and meant to revolutionize our lives, but they still relied on human input and did not have much in the way of artificial intelligence. Now Cortana, Siri, and Alexa respond to voice commands like something out of Star Trek. Digital assistants are still limited in many ways, but according to VentureBeat, Alexa might be changing how we interact with technology: “How Amazon’s Alexa Is Bringing Intelligent Assistance Into The Mainstream”.
Natural language processing teamed with artificial intelligence has made digital assistants easier to use and more widely accepted. Predictive analytics specialist MindMeld commissioned a “user adoption survey” of voice-based intelligent assistants, and the results show widespread adoption.
Amazon’s Echo, paired with the speech-enabled Alexa service, is not necessarily dominating the market, but Amazon is demonstrating the potential of an intelligent system with added services like Uber, music streaming, financial partners, and many more.
“Such routine and comfort will be here soon, as IA acceptance and use continue to accelerate. What started as a novelty and source of marketing differentiation from a smartphone manufacturer has become the most convenient user interface for the Internet of Things, as well as a plain-spoken yet empathetic controller of our digital existence.”
Amazon is on the right path, as are the other companies experimenting with digital assistants. My biggest quibble is that all of these digital assistants are limited and carry price tags greater than some people’s meal budgets. It is not worth investing in an intelligent assistant unless you need one. I say wait for the better and cheaper technology that will be here soon.
June 15, 2016
I read “Data Lakes vs Data Streams: Which Is Better?” The answer seems to me to be “both.” Streams are now. Lakes are “were.” Who wants to make decisions based on historical data? On the other hand, real-time data may mislead the unwary data sailor. The write up states:
The availability of these new ways [lakes and streams] of storing and managing data has created a need for smarter, faster data storage and analytics tools to keep up with the scale and speed of the data. There is also a much broader set of users out there who want to be able to ask questions of their data themselves, perhaps to aid their decision making and drive their trading strategy in real-time rather than weekly or quarterly. And they don’t want to rely on or wait for someone else such as a dedicated business analyst or other limited resource to do the analysis for them. This increased ability and accessibility is creating whole new sets of users and completely new use cases, as well as transforming old ones.
Good news for self-appointed lake and stream experts. Bad news for a company trying to figure out how to generate new revenues.
The first step may be to answer some basic questions about what data are available, how reliable they are, and who actually knows something about data wrangling. Checking whether the water is polluted is a good idea before diving into a murky lake or stream.
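Answering those basic questions can start with something as simple as profiling a sample before committing to any lake or stream architecture. A minimal sketch, assuming record-shaped data; the field names and sample values are hypothetical:

```python
def profile(records, required_fields):
    """Report per-field missing and null rates: a first-pass
    reliability check before investing in storage or analytics."""
    counts = {f: {"missing": 0, "null": 0} for f in required_fields}
    for rec in records:
        for f in required_fields:
            if f not in rec:
                counts[f]["missing"] += 1
            elif rec[f] is None:
                counts[f]["null"] += 1
    n = len(records)
    # Convert raw counts to rates over the sample.
    return {f: {k: v / n for k, v in c.items()} for f, c in counts.items()}

# Hypothetical sample pulled from wherever the data currently lives.
sample = [
    {"price": 10.5, "ts": "2016-06-15"},
    {"price": None, "ts": "2016-06-15"},
    {"ts": "2016-06-15"},
]
print(profile(sample, ["price", "ts"]))
```

If a third of the “price” values are missing or null, that fact matters more than whether the data sit in a lake or flow in a stream.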
Stephen E Arnold, June 15, 2016
June 13, 2016
I noted two items which reminded me why I enjoy Sillycon Valley techno wizardry. The first item concerns the Hulk Hogan Gawker matter. The story “Gawker Files for Bankruptcy and Says It Will Sell the Company to Ziff Davis or Someone Else” converted to a quasi emoji in my addled goose brain; to wit:
My hunch is that anyone who wants to annoy the founder of Palantir Technologies may want to consider the risks. That splat is ugly and may be blended with an aniline dye.
The other item makes clear that the Alphabet Google thing is an objective algorithmic construct, kissed by the golden Sillycon Valley sun. Navigate to “There’s No Evidence That Google Is Manipulating Searches to Help Hillary Clinton.” Therein resides the truth. I learned:
Apparently, Google has a policy of not suggesting that customers do searches on people’s crimes. I have no inside knowledge of why it runs its search engine this way. Maybe Google is just uncomfortable with having an algorithm suggesting that people search for other people’s crimes. In any event, there’s no evidence that this is specific to Hillary Clinton, and therefore no reason to think this is a conspiracy by Google to help Clinton win the election.
Definitely rock solid from a person whose brother works at Google. Even more reason to accept the Sillycon Valley objectivity argument.
Stephen E Arnold, June 13, 2016
June 13, 2016
The article on CMSWire titled Recommind Adds Muscle to Cloud e-Discovery relates the upgrades to Recommind’s Axcelerate e-Discovery platform. The muscle in the title refers to the new Efficiency Scoring feature, which aims to increase transparency in the e-discovery review process by tracking efficiency and enabling consistent assessment. The article explains,
“Axcelerate Cloud is built on Recommind’s interactive business intelligence layer to give legal professionals a depth of insight into the e-discovery process that Recommind says they have previously lacked. Behind all the talk of agility and visibility, there is one goal here: control. The company hopes this release allays the fears of legal firms, who traditionally have been reluctant to use cloud-based software for fear of compromising data.”
Hal Marcus, Director of Product Marketing at Recommind, suggested that despite early hesitancy among legal professionals to embrace the cloud, current legal teams are more open to the possibilities of consolidating discovery requirements in the cloud. According to research, there are no enterprise legal departments without some cloud-based legal resource for contract management, billing, or e-discovery. Axcelerate Cloud aims to promote visibility into discovery practices to address a major concern among legal professionals: insufficient insight and transparency.
Chelsea Kerwin, June 13, 2016
June 10, 2016
Libraries are more than places to check out free DVDs and books and use a computer. Most people do not believe this, and if you try to tell them otherwise, their eyes glaze over and they start chanting “obsolete” under their breath. BoingBoing, however, makes the case in “How Libraries Can Save The Internet Of Things From The Web’s Centralized Fate”. Over the past twenty years, the Internet has become more centralized, and content is increasingly reliant on proprietary sites, such as social media, Amazon, and Google.
Back in the old days, the greatest fear was that the government would take control of the Internet. The opposite has happened, with corporations consolidating the Internet. Some decentralization is taking place, mostly to keep Internet use anonymous, and those efforts are usually tied to the Dark Web. The next big thing is “the Internet of Things,” which will be mostly decentralized, and it can stay that way if the groundwork is laid now. Libraries can protect decentralized systems, because
“Libraries can support a decentralized system with both computing power and lobbying muscle. The fights libraries have pursued for a free, fair and open Internet infrastructure show that we’re players in the political arena, which is every bit as important as servers and bandwidth. What would services built with library ethics and values look like? They’d look like libraries: Universal access to knowledge. Anonymity of information inquiry. A focus on literacy and on quality of information. A strong service commitment to ensure that they are available at every level of power and privilege.”
Libraries can teach people how to access services like Tor and disseminate that information more widely than many other institutions in the community. While this is possible, in many ways it is not realistic, for several reasons. Many decentralized technologies are associated with the Dark Web, which is held in a negative light. Libraries also have limited budgets, and a program like this needs funding the library board might not want to commit. Then comes the problem of finding someone to teach these services: many libraries are staffed by librarians with limited technical knowledge, although they can learn.
It is possible; it would just be hard.
June 9, 2016
Rogue trading has always been a problem for the stock market, but the more technology advances, the easier it becomes for rogue traders to take advantage. The good news is that security and compliance officers can use the same tools rogue traders use in their schemes to stop them. CNBC ran the story “Tech Takes On Rogue Traders,” which explains how technology is being used to stop the bad guys. The report is described as:
“Colleen Graham, Chief Supervisory Officer at Signac, discusses Palantir and Credit Suisse’s joint technology initiative to crack down on rogue traders.”
Palantir’s technology is being used with Credit Suisse to monitor trader behavior data, trade data, risk data, and market data, tracking how a trader changes over time. The system compares each individual trader to others invested in similar stocks. Using a combination of all these data fields, unusual behavior is flagged to prevent rogue trading.
Rogue trading is among the biggest sources of losses on Wall Street. The data Signac gathers helps figure out how rogue trading happens and what causes it. By using analytical software, compliance officers are able to learn from past crimes and teach the software to recognize similar patterns, which in turn allows them to prevent future ones. While some false positives are generated, all of the captured data is already public; supervisors and other people are supposed to read this data anyway, and Signac simply does so at a more in-depth level.
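Neither CNBC nor Signac details the algorithms involved. As a generic illustration of the peer-comparison idea described above, here is a leave-one-out z-score sketch in Python; the trader names, volumes, and threshold are all hypothetical, and a production system would combine many such signals across trade, risk, and market data.

```python
from statistics import mean, stdev

def flag_outliers(volumes, threshold=3.0):
    """Flag traders whose activity deviates from their peer group by
    more than `threshold` standard deviations (leave-one-out z-score)."""
    flagged = {}
    for trader, vol in volumes.items():
        # Compare each trader against the rest of the group.
        peers = [v for t, v in volumes.items() if t != trader]
        mu, sigma = mean(peers), stdev(peers)
        if sigma and abs(vol - mu) / sigma > threshold:
            flagged[trader] = (vol - mu) / sigma
    return flagged

# Hypothetical daily trade volumes for five traders in similar stocks.
volumes = {"t1": 100, "t2": 110, "t3": 95, "t4": 105, "t5": 400}
print(flag_outliers(volumes))  # flags only t5
```

The false positives mentioned in the report are the price of any such threshold: set it low and honest but busy traders get flagged; set it high and subtle schemes slip through.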
Catching rogue traders helps keep Wall Street running more smoothly and even puts stockbrokers and other financial workers back to work.
Palantir scored a new deal from this venture. The same technology used to monitor the Dark Web is used to capture rogue traders.