March 8, 2017
According to Scylla, its latest release is currently the fastest NoSQL database. We learn about the update from SiliconAngle’s article, “ScyllaDB Revamps NoSQL Database in 1.3 Release.” To support the claim, the company points to a performance benchmark run with the Yahoo Cloud Serving Benchmark (YCSB). That test compared ScyllaDB to the open source Cassandra database and found Scylla to be 4.6 times faster than a standard Cassandra cluster.
Writer Mike Wheatley elaborates on the product:
ScyllaDB’s biggest differentiator is that it’s compatible with the Apache Cassandra database APIs. As such, the creators claim that ScyllaDB can be used as a drop-in replacement for Cassandra itself, offering users the benefit of improved performance and scale that comes from the integration with a light key/value store.
The company says the new release is geared towards development teams that have struggled with Big Data projects, and claims a number of performance advantages over more traditional development approaches, including:
* 10X throughput of baseline Cassandra – more than 1,000,000 CQL operations per second per node
* Sub-1msec 99% latency
* 10X per-node storage capacity over Cassandra
* Self-tuning database: zero configuration needed to max out hardware
* Unparalleled high availability, native multi-datacenter awareness
* Drop-in replacement for Cassandra – no additional scripts or code required
Wheatley cites Scylla’s CTO when he points to better integration with graph databases and improved support for Thrift, Date Tiered Compaction Strategy, Large Partitions, Docker, and CQL tracing. I notice the company is hiring as of this writing. Don’t let the Tel Aviv location of Scylla’s headquarters stop you from applying if you don’t happen to live nearby—they note that their developers can work from anywhere in the world.
Cynthia Murrell, March 8, 2017
February 20, 2017
Analytics are catching up to content. In a recent ZDNet article, “Digimind partners with Ditto to add image recognition to social media monitoring,” we are reminded that images reign supreme on social media. Between Pinterest, Snapchat, and Instagram, messages are often conveyed through images as opposed to text. Capitalizing on this, intelligence software company Digimind has announced a partnership with Ditto Labs to introduce image-recognition technology into its social media monitoring software, Digimind Social. We learned,
The Ditto integration lets brands identify the use of their logos across Twitter no matter the item or context. The detected images are then collected and processed on Digimind Social in the same way textual references, articles, or social media postings are analysed. Logos that are small, obscured, upside down, or in cluttered image montages are recognised. Object and scene recognition means that brands can position their products exactly where their customers are using them. Sentiment is measured by the amount of people in the image and counts how many of them are smiling. It even identifies objects such as bags, cars, car logos, or shoes.
It was only a matter of time before these types of features emerged in social media monitoring. For years now, images have been shown to increase engagement even on platforms that began with a stronger focus on text. Will we see more watermarked logos on images? More creative ways to visually identify brands? Both are likely, and we will be watching to see what transpires.
Megan Feil, February 20, 2017
February 10, 2017
So far, this has been a booming year for DMCA takedown requests, we learn from TorrentFreak’s article, “Google Wipes Record Breaking Half Billion Pirate Links in 2016.” The number of wiped links has been growing rapidly over the last several years, but is that good or bad news for copyright holders? That depends on whom you ask. Writer Ernesto reveals the results of TorrentFreak’s most recent analysis:
Data analyzed by TorrentFreak reveals that Google recently received its 500 millionth takedown request of 2016. The counter currently [in mid-July] displays more than 523,000,000, which is yet another record. For comparison, last year it took almost the entire year to reach the same milestone. If the numbers continue to go up at the same rate throughout the year, Google will process a billion allegedly infringing links during the whole of 2016, a staggering number.
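The billion-link projection in that excerpt checks out with simple extrapolation. A quick sketch (the mid-July day count below is our assumption, not a figure from the article):

```python
# Back-of-the-envelope check of TorrentFreak's projection.
# Assumption: "mid-July" is roughly day 196 of 2016, a 366-day leap year.
links_so_far = 523_000_000   # takedown requests reported by mid-July
day_of_year = 196            # assumed mid-July
days_in_2016 = 366           # 2016 was a leap year

daily_rate = links_so_far / day_of_year
projected_total = daily_rate * days_in_2016
print(f"Projected 2016 total: {projected_total / 1e9:.2f} billion links")
```

At a constant rate, the year-end total lands just under a billion, consistent with the article’s “staggering number.”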
According to Google, roughly 98% of the reported URLs are indeed removed. This means that half a billion links were stripped from search results this year alone. However, according to copyright holders, this is still not enough. Entertainment industry groups such as the RIAA, BPI, and MPAA have pointed out repeatedly that many files simply reappear under new URLs.
Indeed, copyright holders continue to call for Google to take stronger measures. For its part, the company insists the increased link removals are evidence that its process is working quite well. It issued an update of its report, “How Google Fights Piracy.” The two sides remain deeply divided and will likely be at odds for some time. Ernesto tells us some copyright holders are calling for the government to step in. That could be interesting.
Cynthia Murrell, February 10, 2017
February 2, 2017
Want to be an expert searcher? Gizbot shares some tips, complete with screenshots, in their brief write-up, “Here are 5 Tricks To Get Better Google Search Results.” Writer Sneha Saha begins:
To get any information about anything is easy. Just type the keywords on the Google Search engine and you are done. Rather you might just get information that is far more than what you would actually need. However, getting more information than you require is also a little annoying. Searching for the accurate information among the numerous links that Google provides you with is surely a tough task. We at GizBot have come up with a list of effective methods to try out to search the most accurate information on Google in just a few clicks.
Here are the five tricks: Search for synonyms using a tilde symbol; Use an asterisk in place of any word you cannot remember; Include “or” when confused between two options; Use “intitle” to find keywords within a title or “inurl” to find keywords within a URL; Narrow results by including a date range in your query. See the post for details on any of these search tips.
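For reference, the five patterns boil down to plain strings typed into the search box. An illustration with invented example terms (operator behavior is Google’s to change — the synonym tilde, for instance, has since been retired):

```python
# Each variable holds one of the five query patterns from the article.
# The search terms are invented examples; Google interprets the operators.
synonym    = "~inexpensive laptops"            # tilde: include synonyms
wildcard   = '"a * saved is a * earned"'       # *: stands in for a forgotten word
either     = "jaguar speed OR panther speed"   # OR: match either option
in_title   = "intitle:benchmark NoSQL"         # intitle: keyword must be in the page title
in_url     = "inurl:downloads drivers"         # inurl: keyword must be in the URL
date_range = "olympics 2012..2016"             # two dots: numeric or date range

for query in (synonym, wildcard, either, in_title, in_url, date_range):
    print(query)
```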
Cynthia Murrell, February 2, 2017
February 1, 2017
The article titled “Google Cloud Platform Releases New Database Services, Fighting AWS and Azure for Corporate Customers” on GeekWire suggests that Google’s corporate offerings have been weak in the area of database management. Compared to Amazon Web Services and Microsoft Azure, Google is only wading into the somewhat monotonous arena of corporate database needs. The article goes into detail on the offerings,
Cloud SQL, Second Generation, is a service offering instances of the popular MySQL database. It’s most comparable to AWS’s Aurora and SQL Azure, though there are some differences from SQL Azure, so Microsoft allows running a MySQL database on Azure. Google’s Cloud SQL supports MySQL 5.7, point-in-time recovery, automatic storage resizing and one-click failover replicas, the company said. Cloud Bigtable is a NoSQL database, the same one that powers Google’s own search, analytics, maps and Gmail.
The Cloud Bigtable database is made to handle major workloads of 100+ petabytes, and it comes equipped with resources such as Hadoop and Spark. It will be fun to see what happens as Google’s new service offering hits the ground running. How will Amazon and Microsoft react? Will price wars arise? If so, only good can come of it, at least for the corporate consumers.
Chelsea Kerwin, February 1, 2017
February 1, 2017
With all the recent chatter around “fake news,” one researcher has decided to approach the problem scientifically. An article at Fortune reveals “What a Map of the Fake-News Ecosystem Says About the Problem.” Writer Mathew Ingram introduces us to data-journalism expert and professor Jonathan Albright, of Elon University, who has mapped the fake-news ecosystem. Facebook and Google are just unwitting distributors of faux facts; Albright wanted to examine the network of sites putting this stuff out there in the first place. See the article for a description of his methodology; Ingram summarizes the results:
More than anything, the impression one gets from looking at Albright’s network map is that there are some extremely powerful ‘nodes’ or hubs, that propel a lot of the traffic involving fake news. And it also shows an entire universe of sites that many people have probably never heard of. Two of the largest hubs Albright found were a site called Conservapedia—a kind of Wikipedia for the right wing—and another called Rense, both of which got huge amounts of incoming traffic. Other prominent destinations were sites like Breitbart News, DailyCaller and YouTube (the latter possibly as an attempt to monetize their traffic).
Albright said he specifically stayed away from trying to determine what or who is behind the rise of fake news. … He just wanted to try and get a handle on the scope of the problem, as well as a sense of how the various fake-news distribution or creation sites are inter-connected. Albright also wanted to do so with publicly-available data and open-source tools so others could build on it.
Albright also pointed out the folly of speculating on sources of fake news; such guesswork only “adds to the existing noise,” he noted. (Let’s hear it for common sense!) Ingram points out that, armed with Albright’s research, Google, Facebook, and other outlets may be better able to combat the problem.
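Mechanically, spotting the “powerful nodes” such a map reveals is an in-degree (or centrality) computation over the link graph. A minimal sketch with invented site names, not Albright’s actual data:

```python
from collections import Counter

# Hypothetical "source links to destination" edges; in a map like
# Albright's, the destinations with the most incoming edges are the hubs.
edges = [
    ("siteA", "hub1"), ("siteB", "hub1"), ("siteC", "hub1"),
    ("siteA", "hub2"), ("siteC", "hub2"),
    ("siteB", "outlier"),
]

in_degree = Counter(dst for _src, dst in edges)
ranking = in_degree.most_common()   # sites with the most incoming links first
print(ranking)
```

On real crawl data, the same structure supports richer centrality measures; the point is only that hub detection falls out of publicly available link data and open-source tools, as Albright intended.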
January 23, 2017
The article titled “Microsoft Launches Researcher and Editor in Word, Zoom in PowerPoint” on VentureBeat discusses the pros and cons of the new features coming to Office products. Editor is basically a new and improved version of spellcheck that goes beyond typos to report back on wordiness, passive voice, and cliché usage. This is an exciting tool that might put a few proofreaders out of work, but it is hard to see any issues beyond that. The more controversial introduction by Microsoft is Researcher, and the article explains why,
Researcher… will give users a way to find and incorporate additional information from outside sources. This makes it easy to add a quote and even generate proper academic citations for use in papers. Explicit content won’t appear in search results, so you won’t accidentally import it into your work. And you won’t find yourself in some random Wikipedia rabbit hole, because the search for additional information happens in a panel on the right side of your Word document.
Researcher pulls information from the Bing Knowledge Graph to provide writers with relevant connections to their topics. The question is, will users rely on Researcher to fact-check for them, or will they make sure that the suggested source material is appropriate and substantiated? In spite of the lessons of the Republican National Convention, plagiarism can get you into big trouble (in a college classroom, anyway). It is easy to see student users failing to properly cite or quote the suggested information, unless Researcher offers help with those activities as well. Is this a good thing, or is it another way to make our children dumber by enabling shortcuts?
Chelsea Kerwin, January 23, 2017
January 18, 2017
Everyone’s New Year’s resolution is usually to lose weight, and by the time January swings around again, that resolution has gone out the door with the spring cleaning. Exercise can be a challenge, but you can always exercise your search skills by reading Medium’s article, “Google Search Tricks To Become A Search Power User.” Or at least the article promises to improve your search skills.
Let’s face it: searching on the Web might seem simple, but it requires a little more brainpower than dumping keywords into a search box. Google makes searching easier and is even the Swiss army knife of answering basic questions. The Medium article does go a step further by drawing on old-school search tips, such as the asterisk, quotes, parentheses, and others. These explanations, however, need to be read more than once to understand how the tools work:
My favorite of all, single word followed by a ‘*’ will do wonders. But yeah this will not narrow your results; still it keeps a wider range of search results. You’ll need to fine tune to find exactly what you want. This way is useful in case when you don’t remember more than a word or two but you still you want to search fully of it.
Having used some of these tips myself, I find they actually make searching more complicated than simply taking a little extra time to read the search results. I am surprised that the article did not include the traditional Boolean operators, which usually work, more or less. Sometimes search tips cause more trouble than they are worth.
Whitney Grace, January 18, 2017
January 18, 2017
Big Data and Cloud Computing were supposed to make it easier for the C-suite to make billion-dollar decisions. But it seems things have started to fall apart.
In an article published by Forbes titled “The Data Warehouse Has Failed, Will Cloud Computing Die Next?,” the author says:
A company that sells software tools designed to put intelligence controls into data warehousing environments says that traditional data warehousing approaches are flaky. Is this just a platform to spin WhereScape wares, or does Whitehead have a point?
WhereScape, a key player in data warehousing, is admitting that the IT industry’s buzzwords are fizzling out. Big Data is being generated in abundance, but companies are still unsure what to do with the enormous amount of data they produce.
Large corporations that have already invested heavily in Big Data have yet to see any ROI. As the author points out:
Data led organizations have no idea how good their data is. CEOs have no idea where the data they get actually comes from, who is responsible for it etc. yet they make multi million pound decisions based on it. Big data is making the situation worse not better.
It looks as if, after 3D printing, Big Data and Cloud Computing may become the tech world’s next fizzled-out buzzwords.
Vishal Ingole, January 18, 2017
January 12, 2017
The heavy hand of Chinese censorship has just gotten heavier. The South China Morning Post reports, “All News Stories Must Be Verified, China’s Internet Censor Decrees as it Tightens Grip on Online Media.” The censorship agency now warns websites not to publish news without “proper verification.” Of course, to hear the government tell it, it just wants to cut down on fake news and false information. Reporter Choi Chi-yuk writes:
The instruction, issued by the Cyberspace Administration of China, came only a few days after Xu Lin, formerly the deputy head of the organisation, replaced his boss, Lu Wei, as the top gatekeeper of Chinese internet affairs. Xu is regarded as one of President Xi Jinping’s key supporters.
The cyberspace watchdog said online media could not report any news taken from social media websites without approval. ‘All websites should bear the key responsibility to further streamline the course of reporting and publishing of news, and set up a sound internal monitoring mechanism among all mobile news portals [and the social media chat websites] Weibo or WeChat,’ Xinhua reported the directive as saying. ‘It is forbidden to use hearsay to create news or use conjecture and imagination to distort the facts,’ it said.
We’re told the central agency has directed regional offices to aggressively monitor content and “severely” punish those who post what they consider false news. They also insist that sources be named within posts. Apparently, several popular news portals have been rebuked under the policy, including Sina.com, Ifeng.com, Caijing.com.cn, Qq.com and 163.com.
Cynthia Murrell, January 12, 2017