Google Popular Times Now in Real Time
January 20, 2017
Just a quick honk about a little Google feature called Popular Times. Lifehacker points out an improvement to the tool in “Google Will Now Show You How Busy a Business Is in Real Time.” To help users determine the most efficient time to shop or dine, the feature already provided a general assessment of businesses’ busiest times. Now, though, it bases that information on real-time metrics. Writer Thorin Klosowski specifies:
The real time data is rolling out starting today. You’ll see that it’s active if you see a ‘Live’ box next to the popular times when you search for a business. The data is based on location data and search terms, so it’s not perfect, but will at least give you a decent idea of whether or not you’ll easily find a place to sit at a bar or how packed a store might be. Alongside the real-time data comes some other info, including how long people stay at a location on average and hours by department, which is handy when a department like a pharmacy or deli closes earlier than the rest of a store.
Just one more way Google tries to make life a little easier for its users. That using it provides Google with even more free, valuable data is just a side effect, I’m sure.
Cynthia Murrell, January 20, 2017
The Internet Is Once Again Anonymous
January 19, 2017
Let us reminisce for a moment (and if you like, you can visit the Internet Archive) about the Internet’s early days, circa the late 1990s. It was a magical time of chatrooms, instant messaging, and forums. The Internet has not changed these forms of communication much, although chatrooms are pretty much dead, but one great thing about the early days was that the Internet was mostly anonymous. With the rise of tracking software, IP awareness, and social media, Internet anonymity is now reserved for the few who are vigilant and never post anything online. Sometimes, however, you want to interact online without repercussions, and TechCrunch shares that “Secret Founder Returns To Anonymous Publishing With Launch Of IO.”
David Byttow, Secret co-founder, has launched IO, an anonymous publishing app similar to Postcard Confessions. TechCrunch describes IO’s purpose:
IO is a pseudo-resurrection of Secret that Byttow told us in November came into being partly because “the downsides of current social media products MUST be addressed,” an imperative he felt was especially urgent following the results of the last U.S. election. IO’s stated mission is to achieve “authentic publishing,” by which Byttow means that he’s hoping users having the option to publish either anonymously, using a pseudonym, or as their actual selves will allow for easier sharing of true thoughts and feelings.
IO really does not do much. You can type something up and hit publish, but it is only shared with other people if you attach social media links. You can remain anonymous, and IO does include writing-assistance tools. I do not really get why IO is useful, but it does allow a person to create a shareable link without joining a forum, owning a Web site, etc. Reddit seems more practical, though.
Whitney Grace, January 19, 2017
Chinese Censorship Agency Declares All News Online Must Be Verified
January 12, 2017
The heavy hand of Chinese censorship has just gotten heavier. The South China Morning Post reports, “All News Stories Must Be Verified, China’s Internet Censor Decrees as it Tightens Grip on Online Media.” The censorship agency now warns websites not to publish news without “proper verification.” Of course, to hear the government tell it, it just wants to cut down on fake news and false information. Reporter Choi Chi-yuk writes:
The instruction, issued by the Cyberspace Administration of China, came only a few days after Xu Lin, formerly the deputy head of the organisation, replaced his boss, Lu Wei, as the top gatekeeper of Chinese internet affairs. Xu is regarded as one of President Xi Jinping’s key supporters.
The cyberspace watchdog said online media could not report any news taken from social media websites without approval. ‘All websites should bear the key responsibility to further streamline the course of reporting and publishing of news, and set up a sound internal monitoring mechanism among all mobile news portals [and the social media chat websites] Weibo or WeChat,’ Xinhua reported the directive as saying. ‘It is forbidden to use hearsay to create news or use conjecture and imagination to distort the facts,’ it said.
We’re told the central agency has directed regional offices to aggressively monitor content and “severely” punish those who post what they consider false news. They also insist that sources be named within posts. Apparently, several popular news portals have been rebuked under the policy, including Sina.com, Ifeng.com, Caijing.com.cn, Qq.com and 163.com.
Cynthia Murrell, January 12, 2017
Linux Users Can Safely Test Alpha Stage Tor Browser
January 5, 2017
The Tor Project has released an alpha version of a sandboxed Tor Browser, exclusive to Linux, that users can now test.
As reported by Bleeping Computer in the article titled “First Version of Sandboxed Tor Browser Available”:
Sandboxing is a security mechanism employed to separate running processes. In computer security, sandboxing an application means separating its process from the OS, so vulnerabilities in that app can’t be leveraged to extend access to the underlying operating system.
Because the browser is still under development, it remains open to vulnerabilities, and these loopholes can be used by competent parties to track down individuals. Sandboxing is meant to eliminate this possibility. The article further states that:
In recent years, Tor exploits have been deployed in order to identify and catch crooks hiding their identity using Tor. The Tor Project knows that these types of exploits can be used for other actions besides catching pedophiles and drug dealers. An exploit that unmasks Tor users can be very easily used to identify political dissidents or journalists investigating cases of corrupt politicians.
The Tor Project has been trying earnestly to close these loopholes, and this seems to be one of its efforts to help netizens stay safe from prying eyes. But again, no system is foolproof. As soon as the new version is released, another exploit might follow suit.
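To make the idea of sandboxing a bit more concrete, here is a minimal, hypothetical sketch of process isolation on Linux. It is not the Tor Project’s actual sandbox; it assumes util-linux’s unshare command is installed and that the kernel allows unprivileged user namespaces.

# Minimal, hypothetical sketch of process isolation on Linux -- not the Tor
# Project's sandbox. It runs an untrusted command in fresh namespaces with no
# network access. Assumes util-linux's `unshare` is installed and unprivileged
# user namespaces are enabled.
import subprocess

def run_sandboxed(command):
    """Run `command` in new user, network, mount, and PID namespaces."""
    wrapper = [
        "unshare",
        "--user",           # new user namespace: no real root privileges
        "--net",            # new, empty network namespace: no network access
        "--mount",          # private mount namespace
        "--pid", "--fork",  # separate PID namespace
    ]
    return subprocess.run(wrapper + command, capture_output=True, text=True)

if __name__ == "__main__":
    # Inside the sandbox, `ip link` sees only a down loopback interface.
    result = run_sandboxed(["ip", "link"])
    print(result.stdout or result.stderr)

A real browser sandbox adds far more (seccomp filters, a restricted filesystem view, broker processes), but the principle is the same: a compromised process has far less to reach.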
Vishal Ingole, January 5, 2017
Malicious Tor Relays on over a Hundred Computers
January 4, 2017
For all the effort enterprises put into securing data through technological solutions, there are also other variables to consider: employees. Ars Technica published an article, “Malicious Computers Caught Snooping on Tor-Anonymized Dark Web Sites,” which explained that malicious relays were found on over 110 machines around the world. Computer scientists at Northeastern University tracked these computers using honeypot .onion addresses, which they call “honions.” The article continues,
The research is only the latest indication that Tor can’t automatically guarantee the anonymity of hidden services or the people visiting them. Last year, FBI agents cracked open a Tor-hidden child pornography website using a technique that remains undisclosed to this day. In 2014, researchers canceled a security conference talk demonstrating a low-cost way to de-anonymize Tor users following requests by attorneys from Carnegie Mellon, where the researchers were employed. Tor developers have since fixed the weakness that made the exploit possible. More than 70 percent of the snooping hidden services directories were hosted on cloud services, making it hard for most outsiders to identify the operators.
While some may wonder whether the snooping is the result of a technical glitch or other error, the article suggests this is not the case. Researchers found that, in order for a directory to misbehave in this way, an operator has to modify Tor’s code and add logging capabilities. The full impact of this finding has yet to be revealed.
Megan Feil, January 4, 2017
Tor Anonymity Not 100 Percent Guaranteed
January 1, 2017
An article at Naked Security, “‘Honey Onions’ Probe the Dark Web: At Least 3% of Tor Nodes are Rogues,” reveals some information turned up by innovative Tor-exploring hidden services. By “rogues,” writer Paul Ducklin is referring to nodes, run by criminals and law enforcement alike, that are able to track users through Tor entry and/or exit nodes. The article nicely lays out how this small fraction of nodes can capture IP addresses, so see the article for that explanation. As Ducklin notes, three percent is a small enough window that someone who just wishes to avoid having their shopping research tracked may remain unconcerned, but it is a bigger matter for, say, a journalist investigating events in a war-torn nation. He writes:
Two researchers from Northeastern University in Boston, Massachusetts, recently tried to measure just how many rogue HSDir nodes there might be, out of the 3000 or more scattered around the world. Detecting that there are rogue nodes is fairly easy: publish a hidden service, tell no one about it except a minimum set of HSDir nodes, and wait for web requests to come in.[…]
With 1500 specially-created hidden services, amusingly called ‘Honey Onions,’ or just Honions, deployed over about two months, the researchers measured 40,000 requests that they assume came from one or more rogue nodes. (Only HSDir nodes ever knew the name of each Honion, so the researchers could assume that all connections must have been initiated by a rogue node.) Thanks to some clever mathematics about who knew what about which Honions at what time, they calculated that these rogue requests came from at least 110 different HSDir nodes in the Tor network.
It is worth noting that many of those requests were simple pings, but others were actively seeking vulnerabilities. So, if you are doing anything more sensitive than comparing furniture prices, you’ll have to decide whether you want to take that three percent risk. Ducklin concludes by recommending added security measures for anyone concerned.
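The “clever mathematics” is essentially a set-cover argument: every honion that received an unsolicited request must have been leaked by at least one of the few HSDir nodes that were told about it, so the smallest set of HSDirs that explains every visit bounds the number of rogues. The snippet below is a rough, hypothetical sketch of that greedy estimate with invented data; it is not the researchers’ code.

# Rough, hypothetical sketch of the honion set-cover estimate (invented data,
# not the researchers' code). Each visited honion was known only to a handful
# of HSDir nodes, so at least one of those nodes must be snooping; a greedy
# set cover approximates the smallest set of HSDirs that explains every visit.

known_by = {
    "honion_a": {"hsdir_1", "hsdir_2"},
    "honion_b": {"hsdir_2", "hsdir_3"},
    "honion_c": {"hsdir_4", "hsdir_5"},
}
visited = {"honion_a", "honion_b", "honion_c"}  # honions that got unsolicited hits

def estimate_rogue_hsdirs(known_by, visited):
    """Greedy set cover: a small set of HSDirs that accounts for every visit."""
    uncovered = set(visited)
    rogues = set()
    while uncovered:
        candidates = {d for h in uncovered for d in known_by[h]}
        # Pick the HSDir that explains the most still-unexplained visits.
        best = max(candidates, key=lambda d: sum(d in known_by[h] for h in uncovered))
        rogues.add(best)
        uncovered = {h for h in uncovered if best not in known_by[h]}
    return rogues

print(estimate_rogue_hsdirs(known_by, visited))  # e.g. {'hsdir_2', 'hsdir_4'}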
Cynthia Murrell, January 1, 2017
Cybersecurity Technologies Fueled by Artificial Intelligence
December 28, 2016
With terms like virus being staples in the cybersecurity realm, it is no surprise the human immune system is the inspiration for the technology fueling one relatively new digital threat defense startup. The TechRepublic article, “Darktrace bolsters machine learning-based security tools to automatically attack threats,” reveals more details and context about Darktrace’s technology and positioning. Founded in 2013, Darktrace recently announced it has raised $65 million to help fund its global expansion. Its product suite comprises four products, including its basic cyber threat defense solution, also called Darktrace. The article expands on the offerings:
Darktrace also offers its Darktrace Threat Visualizer, which provides analysts and CXOs with a high-level, global view of their enterprise. Darktrace Antigena complements the core Darktrace product by automatically defending against potential threats that have been detected, acting as digital “antibodies.” Finally, the Industrial Immune System is a version of Darktrace designed for Industrial Control Systems (ICS). The key value provided by Darktrace is the fact that it relies on unsupervised machine learning, and it is able to detect threats on its own without much human interaction.
We echo this article’s takeaway that machine learning and other artificial intelligence technologies continue to grow in the cybersecurity sector. The attention on AI is only building in this industry and others. Perhaps AI is particularly well-suited to cybersecurity, given that its behind-the-scenes nature mirrors that of Dark Web-related crimes.
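As a very rough illustration of the general technique (unsupervised anomaly detection over traffic features), and not of Darktrace’s proprietary models, something along these lines is possible with an off-the-shelf library such as scikit-learn:

# Minimal, hypothetical sketch of unsupervised anomaly detection on network
# traffic features. It illustrates the general technique only; Darktrace's
# actual models are proprietary and are not shown here.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Invented "normal" traffic: [bytes_sent, bytes_received, distinct_ports]
normal_traffic = rng.normal(loc=[5_000, 20_000, 3],
                            scale=[1_000, 4_000, 1],
                            size=(500, 3))

# Fit on unlabeled traffic only -- no examples of attacks are required.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

new_observations = np.array([
    [5_200, 21_000, 3],   # looks like ordinary traffic
    [90_000, 500, 60],    # huge upload across many ports: worth an analyst's look
])
print(model.predict(new_observations))  # 1 = normal, -1 = flagged as anomalous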
Megan Feil, December 28, 2016
Now Watson Wants to Be a Judge
December 27, 2016
IBM has deployed Watson in many fields, including the culinary arts, sports, and medicine. The big data supercomputer can be used in any field or industry that creates a lot of data. Watson, in turn, will digest the data and, depending on the algorithms, spit out results. Now IBM wants Watson to take on the daunting task of judging, says The Drum in “Can Watson Pick A Cannes Lion Winner? IBM’s Cognitive System Tries Its Arm At Judging Awards.”
According to the article, judging is a cognitive process and requires special algorithms, not to mention that human judges bring their own biases. In other words, it should be right up Watson’s alley (perhaps the results will be less subjective as well). The Drum decided to put Watson to the ultimate creative test and fed it thousands of previous Cannes Lions entries. Then Watson predicted which work would win in this year’s Outdoor category.
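The Drum does not describe Watson’s internals, but the general pattern, training a model on past entries labeled as winners or losers and then scoring new entries, can be sketched roughly as below. The entries, labels, and features are invented for illustration; this is not how Watson actually judges.

# Rough, hypothetical sketch of "learn from past award entries, score new ones."
# The entries and labels are invented; this does not reflect Watson's actual
# (undisclosed) approach to judging Cannes Lions work.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_entries = [
    "interactive billboard reacts to passing traffic",
    "static poster with product shot and tagline",
    "bus shelter installation powered by live weather data",
    "print ad reused unchanged as an outdoor banner",
]
won_lion = [1, 0, 1, 0]  # invented labels: 1 = won, 0 = did not

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_entries, won_lion)

new_entry = ["billboard that changes with real-time traffic conditions"]
print(model.predict_proba(new_entry)[0][1])  # estimated probability of winning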
This could change the way contests are judged:
The Drum’s magazine editor Thomas O’Neill added: “This is an experiment that could massively disrupt the awards industry. We have the potential here of AI being able to identify an award winning ad from a loser before you’ve even bothered splashing out on the entry fee. We’re looking forward to seeing whether it proves as accurate in reality as it did in training.”
I would really like to see this applied to the Academy Awards, which are often criticized for their lack of diversity and for a voting body consisting largely of older, white men. It would be great to see whether Watson would yield different results than what the Academy actually selects.
Whitney Grace, December 27, 2016
Shorter Content Means Death for Scientific Articles
December 26, 2016
The digital age is a culture that subsists on digesting quick bits of information before moving on to the next. Scientific journals are hardly the herald of popular trends, but in order to maintain relevancy with audiences, the journals are pushing for shorter articles. The shorter articles, however, present a problem for authors, says Ars Technica in “Scientific Publishers Are Killing Research Papers.”
Shorter articles are pushed in part because print journals have a limited number of pages. Journals also pressure authors to favor results and conclusions over methods to keep articles short. The methods, in fact, are usually relegated to a separate document labeled supplementary information:
Supplementary information doesn’t come in the print version of journals, so good luck understanding a paper if you like reading the hard copy. Neither is it attached to the paper if you download it for reading later—supplementary information is typically a separate download, sometimes much larger than the paper itself, and often paywalled. So if you want to download a study’s methods, you have to be on a campus with access to the journal, use your institutional proxy, or jump through whatever hoops are required.
The lack of methods information can hurt researchers who rely on those details to judge whether a study is relevant to their own work. The shortened articles also reference the supplementary materials, and without them it can be hard to understand the published results. Shorter scientific articles may be better for general interest, but if they lack significant information, then how can general audiences understand them?
In short, the supplementary material should be included online and should be easy to access.
Whitney Grace, December 26, 2016
Costs of the Cloud
December 15, 2016
The cloud was supposed to save organizations a bundle on servers, but now we learn from Datamation that “Enterprises Struggle with Managing Cloud Costs.” The article cites a recent report from Dimensional Research and cloud-financial-management firm Cloud Cruiser, which tells us, for one thing, that 92 percent of organizations surveyed now use the cloud. Researchers polled 189 IT pros at Amazon Web Services (AWS) Global Summit in Chicago this past April, where they also found that 95 percent of respondents expect their cloud usage to expand over the next year.
However, organizations may wish to pause and reconsider their approach before throwing more money at cloud systems. Writer Pedro Hernandez reports:
Most organizations are suffering from a massive blind spot when it comes to budgeting for their public cloud services and making certain they are getting their money’s worth. Nearly a third of respondents said that they aren’t proactively managing cloud spend and usage, the study found. A whopping 82 percent said they encountered difficulties reconciling bills for cloud services with their finance departments.
‘The top challenge with the continuously growing public cloud resource is the ability to manage allocation usage and costs,’ stated the report. ‘IT and Finance continue to have difficulty working together to ascertain and allocate public cloud usage, and IT continues to struggle with technologies that will gather and track public cloud usage information.’ …
David Gehringer, principal at Dimensional Research, believes it’s time for enterprises to quit treating the cloud differently and adopt IT monitoring and cost-control measures similar to those used in their own data centers.
The report also found that top priorities for respondents included cost and reporting at 54 percent, performance management at 46 percent, and resource optimization at 45 percent. It also found that cloud demand is driven by application development and testing, at 59 percent, and big data/analytics, at 31 percent.
The cloud is no longer a shiny new invention, but rather an integral part of most organizations. We would do well to approach its management and funding as we would any other resource. The original report is available, with registration, here.
Cynthia Murrell, December 15, 2016