June 27, 2016
I do not like spyware. Once downloaded onto your computer, it is a pain to delete, and it can even steal personal information. I think making it should be illegal, but some good can come from spyware if it is in the right hands (ideally). Some companies make and sell spyware to government agencies. One of them is Hacking Team, which recently received some bad news, as Naked Security reported in “Hacking Team Loses Global License to Sell Spyware.”
You might remember Hacking Team from 2015, when its systems were hacked and 500 gigabytes of internal files, emails, and product source code were posted online. The security company has spent the past year trying to repair its reputation, but the Italian Ministry of Economic Development dealt the firm another blow. The ministry revoked Hacking Team’s “global authorization” to sell its Remote Control System spyware suite to forty-six countries. Hacking Team can still sell within the European Union and expects to receive approval to sell outside the EU.
“MISE told Motherboard that it was aware that in 2015 Hacking Team had exported its products to Malaysia, Egypt, Thailand, Kazakhstan, Vietnam, Lebanon and Brazil.
The ministry explained that “in light of changed political situations” in “one of” those countries, MISE and the Italian Foreign Affairs, Interior and Defense ministries decided Hacking Team would require “specific individual authorization.” Hacking Team maintains that it does not sell its spyware to governments or government agencies where there is “objective evidence or credible concerns” of human rights violations.”
Hacking Team says that if it suspects any of its products have been used to cause harm, or that customers have violated contract terms, it immediately suspends support. Privacy International does not believe that Hacking Team’s self-regulation is enough.
It points to the old argument that software is a tool and humans cause the problems.
June 15, 2016
The Dark Web and the deep web often get confused by readers. To take a step back, TransUnion’s blog offers a brief read called “The Dark Web & Your Data: Facts to Know” that helpfully covers some basics. First, a definition of the Dark Web: sites accessible only when a computer’s unique IP address is hidden on multiple levels. Specialized software is needed to access the Dark Web because it conceals the machine’s IP address behind layers of encryption. The article continues,
“Certain software programs allow the IP address to be hidden, which provides anonymity as to where, or by whom, the site is hosted. The anonymous nature of the dark web makes it a haven for online criminals selling illegal products and services, as well as a marketplace for stolen data. The dark web is often confused with the “deep web,” the latter of which makes up about 90 percent of the Internet. The deep web consists of sites not reachable by standard search engines, including encrypted networks or password-protected sites like email accounts. The dark web also exists within this space and accounts for approximately less than 1 percent of web content.”
For those not reading news about the Dark Web every day, this seems like a fine piece to help brush up on cybersecurity concerns relevant at the individual user level. TransUnion has its finger on the pulse in educating its clients, as banks are an evergreen target for cybercrime and security breaches. It seems the message of this post to clients can be interpreted as one of the “good luck” variety.
Megan Feil, June 15, 2016
June 8, 2016
Discrimination or wise precaution? Perhaps both? MakeUseOf tells us, “This Is Why Tor Users Are Being Blocked by Major Websites.” A recent study (PDF) by the University of Cambridge; University of California, Berkeley; University College London; and International Computer Science Institute, Berkeley confirms that many sites are actively blocking users who approach through a known Tor exit node. Writer Philip Bates explains:
“Users are finding that they’re faced with a substandard service from some websites, CAPTCHAs and other such nuisances from others, and in further cases, are denied access completely. The researchers argue that this: ‘Degraded service [results in Tor users] effectively being relegated to the role of second-class citizens on the Internet.’ Two good examples of prejudice hosting and content delivery firms are CloudFlare and Akamai — the latter of which either blocks Tor users or, in the case of Macys.com, infinitely redirects. CloudFlare, meanwhile, presents CAPTCHA to prove the user isn’t a malicious bot. It identifies large amounts of traffic from an exit node, then assigns a score to an IP address that determines whether the server has a good or bad reputation. This means that innocent users are treated the same way as those with negative intentions, just because they happen to use the same exit node.”
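The exit-node scoring described in the quote above can be sketched in a few lines. This is a hedged illustration of the general idea, not CloudFlare’s actual mechanism: the sample log, threshold value, and function names are all invented for the example.

```python
# Sketch of IP-reputation scoring for shared exit nodes. The request
# log, threshold, and addresses are hypothetical illustrations.

# Hypothetical edge log: (ip_address, was_malicious) pairs. Every Tor
# user leaving through the same exit node shares one IP address.
request_log = [
    ("203.0.113.7", True),   # abusive bot via an exit node
    ("203.0.113.7", False),  # innocent Tor user, same exit node
    ("203.0.113.7", False),
    ("198.51.100.2", False), # ordinary visitor on a unique IP
]

def reputation(ip, log, bad_threshold=0.25):
    """Score an IP by the share of malicious requests seen from it."""
    seen = [bad for addr, bad in log if addr == ip]
    if not seen:
        return "unknown"
    bad_ratio = sum(seen) / len(seen)
    return "challenge" if bad_ratio >= bad_threshold else "allow"

# The innocent Tor users get challenged anyway, because the shared
# exit-node IP carries a poor aggregate reputation.
print(reputation("203.0.113.7", request_log))   # challenge
print(reputation("198.51.100.2", request_log))  # allow
```

The sketch makes the article’s complaint concrete: the score attaches to the IP address, so everyone behind a busy exit node inherits the worst behavior seen from it.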
The article goes on to discuss legitimate reasons users might want the privacy Tor provides, as well as reasons companies feel they must protect their websites from anonymous users. Bates notes that there is not much one can do about such measures. He does point to Tor’s own Don’t Block Me project, which is working to convince sites to stop blocking people just for using Tor. It is also developing a list of best practices that concerned sites can follow instead. One site, GameFAQs, has reportedly lifted its block, and CloudFlare may be considering a similar move. Will the momentum build, or must those who protect their online privacy resign themselves to being treated with suspicion?
Cynthia Murrell, June 8, 2016
June 3, 2016
DuckDuckGo, like a number of other online outfits, has a presence on Tor, the gateway to a part of the Internet that is actually pretty small. I read “Tor Switches to DuckDuckGo Search Results by Default.” I learned:
[F]or a while now Disconnect has no access to Google search results anymore which we used in Tor Browser. Disconnect being more a meta search engine which allows users to choose between different search providers fell back to delivering Bing search results which were basically unacceptable quality-wise. While Disconnect is still trying to fix the situation we asked them to change the fallback to DuckDuckGo as their search results are strictly better than the ones Bing delivers.
The privacy issue looms large. The write up points out:
…DuckDuckGo made a $25,000 donation to Tor which in recent times has been trying to diversify its funding away from reliance on the US government — including launching a crowdfunding campaign which pulled in just over $200,000 at the start of this year.
How private is Tor? No information about this topic appears in the write up.
Stephen E Arnold, June 3, 2016
May 30, 2016
I read “Did Google’s NHS Patient Data Deal Need Ethical Approval?” As I thought about the headline, my reaction was typically Kentucky, “Is this mom talking or what?”
The write up states:
Now, a New Scientist investigation has found that Google DeepMind deployed a medical app called Streams for monitoring kidney conditions without first contacting the relevant regulatory authority. Our investigation also asks whether an ethical approval process that covers this kind of data transfer should have been obtained, and raises questions about the basis under which Royal Free is sharing data with Google DeepMind.
I hear, “Did you clean up your room, dear?”
The notion of mining data has some charm among some folks in the UK. The opportunity to get a leg up on other outfits has some appeal to the Alphabet Google crowd.
The issue is, “Now that the horse has left the barn, what do we do about it?” Good question if you are a mom type. Ask any teenager about Friday night. Guess what you are likely to learn.
The write up continues:
Minutes from the Royal Free’s board meeting on 6 April make the trust’s relationship with DeepMind explicit: “The board had agreed to enter into a memorandum of understanding with Google DeepMind to form a strategic partnership to develop transformational analytics and artificial intelligence healthcare products building on work currently underway on an acute kidney failure application.” When New Scientist asked for a copy of the memorandum of understanding on 9 May, Royal Free pushed the request into a Freedom of Information Act request.
I recall a statement made by a US official. It may be germane to this question about medical data. The statement: “What we say is secret is secret.” Perhaps this applies to the matter in question.
I circled this passage:
The HRA confirmed to New Scientist that DeepMind had not started the approval process as of 11 May. “Google is getting data from a hospital without consent or ethical approval,” claims Smith. “There are ethical processes around what data can be used for, and for a good reason.”
And Alphabet Google’s point of view? I highlighted this paragraph:
“Section 251 assent is not required in this case,” Google said in a statement to New Scientist. “All the identifiable data under this agreement can only ever be used to assist clinicians with direct patient care and can never be used for research.”
I don’t want to draw any comparisons between the thought processes in some Silicon Valley circles and the Silicon Fen. Some questions:
- Where is that horse?
- Who owns the horse?
- What secondary products have been created from the horse?
My inner voice is saying, “Hit the butcher specializing in horse meat maybe.”
Stephen E Arnold, May 30, 2016
May 30, 2016
One of the fears about automation is that human workers will be replaced and there will no longer be jobs for humanity. Blue-collar jobs are believed to be the first that will be automated, but bankers, financial advisors, and other workers in the financial industry also have cause to worry. Algorithms might replace them, because people are apparently getting faster and better responses from automated bank “workers.”
Perhaps one reason bankers and financial advisors are at risk is the industry’s slow embrace of the tools described in the ABA Banking Journal article “Big Data and Predictive Analytics: A Big Deal, Indeed.” One would think the financial sector would be the first to embrace big data and analytics in order to keep an upper hand on the competition, earn more money, and maintain its relevancy in an ever-changing world. It has, however, been slow to adapt, slower than retail, search, and insurance.
One of the main reasons the financial sector has been holding back:
“There’s a host of reasons why banks have held back spending on analytics, including privacy concerns and the cost for systems and past merger integrations. Analytics also competes with other areas in tech spending; banks rank digital banking channel development and omnichannel delivery as greater technology priorities, according to Celent.”
After the above quote, the article observes that customers are increasingly choosing online banking over visiting branches, but it is a very insipid observation. Big data and analytics offer banks the opportunity to invest in developing better relationships with their customers and even offering more individualized services as a way to one-up Silicon Valley competition. Big data also helps financial institutions comply with banking laws and standards to avoid violations.
Banks do need to play catch-up, but this is probably a lot of moan and groan for nothing. The financial industry will adapt, especially when it is at risk of losing more money. This will be the same for all industries: adapt or get left behind. The further we move from the twentieth century and its generations unaccustomed to digital environments, the more technology integration we will see.
May 27, 2016
Open source software is an excellent idea because it allows programmers across the globe to share and contribute to the same project. It also creates a think-tank-like environment that can (arguably) be applied to any tech field. There is a downside to open source and Creative Commons software, however: it is not a sustainable model. Open Source Everything For The 21st Century discusses the issue in its post “Robert Steele: Should Open Source Code Have A PayPal Address & AON Sliding Scale Rate Sheet?”
The post explains that open source sends an unclear message about how code is generated: it comes from the greater whole rather than from a few individuals. It also is not sustainable, because people need funds to survive as well as to maintain the software. Fair Source is a reasonable partial solution: users are charged if the software is used at a company with fifteen or more employees, but it, too, is not sustainable.
Micro-payments, small payments of a few cents, might be the ultimate solution. Robert Steele wrote that:
“I see the need for bits of code to have embedded within them both a PayPal-like address able to handle micro-payments (fractions of a cent), and a CISCO-like Application Oriented Network (AON) rules and rate sheet that can be updated globally with financial-level latency (which is to say, instantly) and full transparency. Some standards should be set for payment scales, e.g. 10 employees, 100, 1000 and up; such that a package of code with X number of coders will automatically begin to generate PayPal payments to the individual coders when the package hits N use cases within Z organizational or network structures.”
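The tiered scheme in the quote above can be sketched as a small rate-sheet calculation. Steele names only the scale points (10, 100, 1,000 employees); the per-use rates, function names, and sample figures below are invented for illustration.

```python
# Hedged sketch of a tiered micro-payment rate sheet. The tier rates
# (in cents per use) are hypothetical, not from Steele's proposal.

RATE_SHEET = {   # employee-count threshold -> cents per use
    10: 0.001,
    100: 0.01,
    1000: 0.1,
}

def rate_for(org_size):
    """Pick the rate for the largest tier the organization reaches."""
    rate = 0.0
    for threshold in sorted(RATE_SHEET):
        if org_size >= threshold:
            rate = RATE_SHEET[threshold]
    return rate

def payouts(coders, org_size, use_count):
    """Split the total owed for `use_count` uses evenly among coders."""
    total = rate_for(org_size) * use_count
    share = total / len(coders)
    return {coder: round(share, 6) for coder in coders}

# A 150-employee firm using the package 10,000 times owes
# 0.01 * 10,000 = 100 cents, split between two coders.
print(payouts(["alice", "bob"], org_size=150, use_count=10_000))
```

The interesting design question the quote raises is not the arithmetic but the plumbing: updating the rate sheet globally and metering “use cases” with transaction costs smaller than the fractions of a cent being paid out.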
Micro-payments are not a bad idea, and they have occasionally been put into practice, though not widely. No one has really pioneered an effective system for them.
Steele is also an advocate for “…Internet access and individual access to code is a human right, devising new rules for a sharing economy in which code is a cost of doing business at a fractional level in comparison to legacy proprietary code — between 1% and 10% of what is paid now.”
It is the ideal version of the Internet, where people are able to make money from their content and creations, users’ privacy is maintained, and ethics are respected. The current trouble with YouTube channels and copyright comes to mind, as does stolen information sold on the Dark Web and the desire to eradicate online bullying.
May 25, 2016
Some of us consider the movie Minority Report to be a cautionary tale, but apparently the Chinese government sees it as more of a good suggestion. According to eTeknix, that country seems to be planning a crime-prediction unit similar to the one in the movie, except this one will use algorithms instead of psychics. We learn about the initiative from the brief write-up, “China Creating ‘Precrime’ System.” Writer Gareth Andrews informs us:
“The movie Minority Report posed an interesting question to people: if you knew that someone was going to commit a crime, would you be able to charge them for it before it even happens? If we knew you were going to pirate a video game when it goes online, does that mean we can charge you for stealing the game before you’ve even done it?
“China is looking to do just that by creating a ‘unified information environment’ where every piece of information about you would tell the authorities just what you normally do. Decide you want to do something today and it could be an indication that you are about to commit or already have committed a criminal activity.
“With machine learning and artificial intelligence being at the core of the project, predicting your activities and finding something which ‘deviates from the norm’ can be difficult for even a person to figure out. When the new system goes live, being flagged up to the authorities would be as simple as making a few purchases online….”
Indeed. Today’s tech is being used to gradually erode privacy rights around the world, all in the name of security. There is a scene in Minority Report that has stuck with me: Citizens in an apartment building are shown pausing their activities to passively accept the intrusion of spider-like spy-bots into their homes, upon their very faces even, then resuming where they left off as if such an incursion were perfectly normal. If we do not pay attention, one day it may become so.
Cynthia Murrell, May 25, 2016
May 24, 2016
The article on GlobeNewsWire titled Ex-Googler Startup DGraph Labs Raises US$1.1 Million in Seed Funding Round to Build Industry’s First Open Source, Native and Distributed Graph Database names Bain Capital Ventures and Blackbird Ventures as the main investors in the startup. Manish Jain, founder and CEO of DGraph, worked on Google’s Knowledge Graph Infrastructure for six years. He explains the technology:
“Graph data structures store objects and the relationships between them. In these data structures, the relationship is as important as the object. Graph databases are, therefore, designed to store the relationships as first class citizens… Accessing those connections is an efficient, constant-time operation that allows you to traverse millions of objects quickly. Many companies including Google, Facebook, Twitter, eBay, LinkedIn and Dropbox use graph databases to power their smart search engines and newsfeeds.”
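The idea in the quote above, relationships stored as first-class data so that following a connection is a direct lookup rather than a join, can be shown with a toy adjacency list. This is a generic illustration of the concept, not DGraph’s actual data model or API; the tiny social graph is made up.

```python
# Minimal graph-database idea: edges are stored with the node, so
# "who does X follow?" is one dictionary lookup, not a table scan.

from collections import deque

# Adjacency list for a hypothetical follow graph.
follows = {
    "ada": ["grace", "alan"],
    "grace": ["alan"],
    "alan": [],
}

def reachable(graph, start):
    """Breadth-first traversal: every node reachable from `start`."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):  # constant-time edge access
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

print(sorted(reachable(follows, "ada")))  # ['ada', 'alan', 'grace']
```

Because each hop is a direct lookup, traversing millions of connections stays proportional to the edges actually visited, which is the property the quote highlights for newsfeeds and smart search.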
Applications of graph databases include the Internet of Things, behavior analysis, medical and DNA research, and AI. So what will DGraph do with its fresh funds? Jain wants to focus on forging a talented team of engineers and developing the company’s core technology. He notes in the article that this sort of work is hardly the typical obstacle faced by a startup, but rather the focus of major tech companies like Google or Facebook.
Chelsea Kerwin, May 24, 2016
May 18, 2016
A recent report from SimilarWeb tells us what sorts of people turn to Internet search engine DuckDuckGo, which protects users’ privacy, over a more prominent engine, Microsoft’s Bing. The Search Engine Journal summarizes the results in, “New Research Reveals Who is Using DuckDuckGo and Why.”
The study drew its conclusions by looking at the top five destinations of DuckDuckGo users: Whitehatsec.com, Github.com, NYtimes.com, 4chan.org, and YCombinator.com. Note that four of these five sites have pretty specific audiences, and compare them to the top five, more widely used, sites accessed through Bing: MSN.com, Amazon.com, Reddit.com, Google.com, and Baidu.com.
Writer Matt Southern observes:
“DuckDuckGo users also like to engage with their search engine of choice for longer periods of time — averaging 9.38 minutes spent on DuckDuckGo vs. Bing.
“Despite its growth over the past year, DuckDuckGo faces a considerable challenge when it comes to getting found by new users. Data shows the people using DuckDuckGo are those who already know about the search engine, with 93% of its traffic coming from direct visits. Only 1.5% of its traffic comes from organic search.
“Roy Hinkis of SimilarWeb concludes by saying the loyal users of DuckDuckGo are those who love tech, and they use DuckDuckGo as an alternative because they’re concerned about having their privacy protected while they search online.”
Though Southern agrees DuckDuckGo needs to do some targeted marketing, he notes traffic to the site has been rising by 22% per year. It is telling that the privacy-protecting engine is most popular among those who understand the technology.
Cynthia Murrell, May 18, 2016