CIA Adapts to Cyber Reality

January 5, 2017

It would be quite the understatement to say the Internet has drastically changed the spy business. The evolution comes with its ups and downs, we learn from the article, “CIA Cyber Official Sees Data Flood as Both Godsend and Danger,” at Stars and Stripes. Reporter Nafeesa Syeed cites an interview with Sean Roche, the CIA’s associate deputy director for digital innovation. The article informs us:

A career CIA official, Roche joined the agency’s new Directorate for Digital Innovation, which opened in October, after serving as deputy director for science and technology.[…]

Roche’s division was the first directorate the CIA added in half a century. His responsibilities include updating the agency’s older systems, which aren’t compatible with current technology and in some cases can’t even accommodate encryption. The directorate also combined those handling the agency’s information technology and internet systems with the team that monitors global cyber threats. ‘We get very good insights into what the cyber actors are doing and we stop them before they get to our door,’ Roche said.

Apparently, finding tech talent has not been a problem for the high-profile agency. In fact, Syeed tells us, many agents who had moved on to the IT industry are returning, in senior positions, armed with their cyber experience. Much new talent is also attracted by the idea of CIA cachet. Roche also asserts he is working to boost ethnic diversity in the CIA by working with organizations that encourage minorities to pursue work in technical fields. What a good, proactive idea! Perhaps Roche would consider also working with groups that promote gender equity in STEM fields.

In case you are curious, Roche’s list of the top nations threatening our cybersecurity includes Russia, China, Iran, and North Korea. No surprises there.

Cynthia Murrell, January 5, 2017

Linux Users Can Safely Test Alpha Stage Tor Browser

January 5, 2017

The Tor Project has released an alpha version of Tor Browser, currently exclusive to Linux, that users can test and run in sandboxed mode.

As reported by Bleeping Computer in an article titled “First Version of Sandboxed Tor Browser Available”:

Sandboxing is a security mechanism employed to separate running processes. In computer security, sandboxing an application means separating its process from the OS, so vulnerabilities in that app can’t be leveraged to extend access to the underlying operating system.

Because the browser is still under development and open to vulnerabilities, such loopholes can be exploited by capable parties to track down individuals. Sandboxing greatly reduces this possibility. The article further states that:

In recent years, Tor exploits have been deployed in order to identify and catch crooks hiding their identity using Tor. The Tor Project knows that these types of exploits can be used for other actions besides catching pedophiles and drug dealers. An exploit that unmasks Tor users can be very easily used to identify political dissidents or journalists investigating cases of corrupt politicians.

The Tor Project has been trying earnestly to close these loopholes, and this seems to be one of its efforts to help netizens stay safe from prying eyes. But again, no system is foolproof. As soon as the new version is released, another exploit might follow suit.
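For readers curious about what process isolation looks like in practice, here is a minimal sketch in Python, assuming the bubblewrap (bwrap) utility is installed. The paths and flags are illustrative and may need adjusting for a given distribution; this is not the Tor Project’s actual sandbox code.

import subprocess

# Illustrative only: launch a program inside a bubblewrap sandbox so it sees
# a read-only system, a throwaway home directory, and no network.
def run_sandboxed(command):
    bwrap_args = [
        "bwrap",
        "--ro-bind", "/usr", "/usr",    # read-only system binaries
        "--ro-bind", "/lib", "/lib",    # adjust binds for your distribution
        "--ro-bind", "/lib64", "/lib64",
        "--tmpfs", "/home",             # empty, disposable home directory
        "--proc", "/proc",
        "--dev", "/dev",
        "--unshare-net",                # no direct network access
        "--unshare-pid",                # separate process ID namespace
    ]
    return subprocess.run(bwrap_args + command, check=False)

if __name__ == "__main__":
    # Even if the launched program is compromised, it cannot read the real
    # /home or reach the network directly.
    run_sandboxed(["ls", "/home"])

The point is simply that a compromised process inside the box has far less to work with than one running directly on the host.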

Vishal Ingole, January 5, 2017

Blippar: Your Phone May Recognize You, Not Just a Product

January 4, 2017

I read “Blippar AI Visual Search Engine Recognizes Faces in Real Time.” The main point of the write up is that you can point your phone at something, and the phone will recognize that thing or person. The flip side is that if your phone has a camera which can see you, your phone makes it easy for “someone” to recognize you. Isn’t that special? Blippar info is at this link.

I learned:

Blippar expanded its augmented reality visual search browser on Tuesday to recognize faces in real time with a simple smartphone camera and return information about that person.

The write up talks about how consumers will drool over this feature. My thought was, “Gee, wouldn’t that function be useful for surveillance purposes?”

The write up included this statement:

The feature allows users to point the camera phone at any real person or their image in a picture on television and the Blippar app returns information about the person from the company’s database filled with more than three billion facts. Real-time facial recognition is the latest tool, amidst expansion in artificial intelligence and deep-learning capabilities.

Yep. Just another “tool.”

Blippar includes a feature for humans who want to be recognized:

For public figures, their faces will be automatically discovered with information drawn from Blipparsphere, the company’s visual knowledge Graph that pulls information from publicly accessible sources, which was released earlier this year. Public figures can also set up their own AR Face Profile. The tool enables them to engage with their fans and to communicate information that is important to them by leveraging their most personal brand — their face. Users also can create fact profiles — Augmented Reality profiles on someone’s face, which users create so they can express who they are visually. Users can view each other’s profiles that have been uploaded and published and can add pictures or YouTube videos, as well as AR moods and much more to express themselves in the moment.

Why not convert existing images to tokens or hashes and then just match faces? Maybe not. Who would want to do this to sell toothpaste?
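To make that aside concrete, here is a rough Python sketch of the hash-and-match idea; it is not Blippar’s actual pipeline, and the 128-dimensional vectors stand in for the output of some face-embedding model that is not shown.

import numpy as np

# A rough sketch of the "hash and match" aside above, not Blippar's actual
# pipeline. Assume a trained face-embedding model (not shown) turns each
# image into a fixed-length vector; recognition is then a nearest-neighbor
# lookup over stored vectors with a similarity threshold.
def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(query_vector, enrolled, threshold=0.8):
    """enrolled maps a person's name to a stored embedding vector."""
    best_name, best_score = None, -1.0
    for name, vector in enrolled.items():
        score = cosine_similarity(query_vector, vector)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Toy demo with made-up 128-dimensional "embeddings."
enrolled = {"public figure": np.random.rand(128)}
query = enrolled["public figure"] + 0.01 * np.random.rand(128)
print(match_face(query, enrolled))   # almost certainly "public figure"

Matching embeddings against a threshold is the standard trick; the privacy question is who holds the enrolled vectors.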

Stephen E Arnold, January 4, 2017

Malicious Tor Relays on over a Hundred Computers

January 4, 2017

For all the effort enterprises put into securing data through technological solutions, there are other variables to consider: employees. Ars Technica released an article, “Malicious computers caught snooping on Tor-anonymized Dark Web sites,” which explained that malicious relays were found on more than 110 machines around the world. Computer scientists at Northeastern University tracked these computers using honeypot .onion addresses, calling them “honions.” The article continues,

The research is only the latest indication that Tor can’t automatically guarantee the anonymity of hidden services or the people visiting them. Last year, FBI agents cracked open a Tor-hidden child pornography website using a technique that remains undisclosed to this day. In 2014, researchers canceled a security conference talk demonstrating a low-cost way to de-anonymize Tor users following requests by attorneys from Carnegie Mellon, where the researchers were employed. Tor developers have since fixed the weakness that made the exploit possible. More than 70 percent of the snooping hidden services directories were hosted on cloud services, making it hard for most outsiders to identify the operators.

While some may wonder whether the snooping is the result of a technical glitch or other error, the article suggests this is not the case. Researchers found that, in order for a directory to misbehave in this way, an operator has to modify Tor’s code and add logging capabilities. The full impact of this snooping has yet to be revealed.
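Based on the article’s description, the detection logic can be sketched in Python as follows. This is an assumption about the method’s general shape, not the Northeastern team’s code, and the generated addresses are placeholders rather than real .onion identifiers.

import secrets

# Each unique honeypot .onion address is disclosed to exactly one hidden
# service directory, so any later visit to that address implicates the
# directory it was exclusively given to.
def make_honions(directories):
    """Create one unpublicized honeypot address per directory under test."""
    return {f"{secrets.token_hex(8)}.onion": directory
            for directory in directories}

def snooping_directories(honions, visited_addresses):
    """Return the directories betrayed by visits to their honeypot address."""
    return {honions[addr] for addr in visited_addresses if addr in honions}

directories = ["hsdir-a", "hsdir-b", "hsdir-c"]
honions = make_honions(directories)
# Suppose the address given only to hsdir-b later receives a visitor:
leaked = [addr for addr, d in honions.items() if d == "hsdir-b"]
print(snooping_directories(honions, leaked))   # {'hsdir-b'}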

Megan Feil, January 4, 2017

Is Your Data up for Sale on Dark Web?

January 4, 2017

A new service has been launched in the UK that enables users to find out if their confidential information is up for sale on the Dark Web.

As reported by Hacked in an article titled “This Tool Lets You Scan the Dark Web for Your (Stolen) Personal Data”:

The service is called OwlDetect and is available for £3,5 a month. It allows users to scan the dark web in search for their own leaked information. This includes email addresses, credit card information and bank details.

The service uses a supposedly sophisticated algorithm with alleged capabilities to penetrate up to 95% of content on the Dark Web. The inability of Open Web search engines to index and penetrate the Dark Web has led to a mushrooming of Dark Web search engines.

OwlDetect works much like early-stage Google, as becomes apparent in the article:

This new service has a database of stolen data. This database was created over the past 10 years, presumably with the help of their software and team. A real deep web search engine does exist, however.

This means the search is not real time and is about as good as searching your local hard drive. Most of the data might be outdated, and the companies that owned the data might have migrated to more secure platforms. Moreover, the user might have deleted the old data. Thus, the service just tells you whether you were ever hacked or whether your data was ever stolen.
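As a toy illustration of the kind of lookup such a service performs, consider the following Python sketch. OwlDetect has not published how its matching works, so the hashed-database approach here is an assumption for illustration only.

import hashlib

# Store hashes of leaked identifiers collected over time, then check whether
# a given email address's hash appears in that historical database.
def sha256_hex(text: str) -> str:
    return hashlib.sha256(text.strip().lower().encode()).hexdigest()

# Pretend this was assembled from a decade of leaked dumps.
breach_database = {
    sha256_hex("alice@example.com"): ["2013 forum dump"],
    sha256_hex("bob@example.com"): ["2015 retailer breach"],
}

def check_exposure(email: str):
    """Return the breaches an address appears in, or an empty list."""
    return breach_database.get(sha256_hex(email), [])

print(check_exposure("alice@example.com"))   # ['2013 forum dump']
print(check_exposure("carol@example.com"))   # []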

Vishal Ingole, January 4, 2017

Nagging Google for Relevance Ranking Secrets

January 3, 2017

I read “Good Luck in Making Google Reveal Its Algorithm.” The title is incorrect. I think the phrase I expected was “algorithms and administrative interfaces.” The guts of Google’s PageRank system appear in the PageRank patent assigned to Stanford’s board of trustees. Because the “research” for PageRank was funded in part by a US government grant, the PageRank patent discloses the basic approach of the Google. If one looks at the “references” to other work, one will find mentions of Eugene Garfield (the original citation-value wizard), the IBM Almaden Clever team, and a number of other researchers and inventors who devised ways to figure out what’s important in the context of linked information.
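For readers who have not waded through the patent, the core link-analysis idea can be illustrated with a short Python sketch. This is a toy power-iteration version of PageRank, not Google’s production code, and the four-page web is invented for the example.

import numpy as np

# A page's importance is the stationary distribution of a random surfer who
# mostly follows links and occasionally jumps to a random page.
def pagerank(adjacency, damping=0.85, iterations=100):
    n = adjacency.shape[0]
    # Column j holds the links out of page j; normalize so each column is a
    # probability distribution over destinations.
    out_degree = adjacency.sum(axis=0)
    out_degree[out_degree == 0] = 1          # avoid dividing by zero
    transition = adjacency / out_degree
    rank = np.full(n, 1.0 / n)
    for _ in range(iterations):
        rank = (1 - damping) / n + damping * transition @ rank
    return rank

# Four pages: 0 and 1 link to each other, and 2 and 3 both link to 0.
# Entry [i, j] = 1 means page j links to page i.
links = np.array([
    [0, 1, 1, 1],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
], dtype=float)
print(pagerank(links))   # page 0 ends up with the highest score

The sketch is the easy part; the hard part, as the rest of this post argues, is everything bolted onto it over fifteen years.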

What folks ignore is that it is expensive to reengineer the algorithmic plumbing at an outfit like Google. Think in terms of Volkswagen rewriting its emissions code and rebuilding its manufacturing plants to produce non-cheating vehicles. That’s the same problem the Google has faced, but magnified by the rate at which changes have been required to keep the world’s most loved Web search system [a] working, [b] ahead of the spoofers who manipulate Mother Google’s relevance ranking, [c] able to handle diverse content, including videos and the social Plus stuff, and [d] usable on mobile.

The result is that Google has taken its Airstream trailer and essentially added tailfins, solar panels, and new appliances; that is, the equivalent of a modern microwave instead of the old, inefficient toaster oven. But the point is that the Google Airstream is still an Airstream just “new and improved.”

The net net is that Google itself cannot easily explain what happens within its 15-year-old, fast-ageing relevance Airstream. Outsiders essentially put up content, fiddle with whatever controls are available, and then wait to see what happens when one runs a query for the content.

The folks driving the Ford F-150 pulling the trailer have controls in the truck. The truck has a dashboard. The truck has extras. The truck has an engine. The entire multi-part assembly is the Google search system.

The point is that Google’s algorithm is not ONE THING. It is a highly complex system, and there are not many people around who know the entire thing. The fact that it works is great. Sometimes, however, the folks driving the Ford F-150 have to fiddle with the dials and knobs. That administrative control panel is hooked to some parts of the gear in the Airstream. Other dials just do things to deal with what is happening right now. Love bugs make it hard to see out of the windscreen, so the driver squirts bug-remover fluid and turns on the windshield wipers. The Airstream stuff comes along for the ride.

The article cited above explains that Google won’t tell a German whoop-de-doo how it works. Well, the author has got the “won’t tell” part right. Even if Google wanted to explain how its “algorithm” works, the company would probably just point to a stack of patents and journal articles and say, “There you go.”

The write up states:

We know that search results – and social media news feeds – are assembled by algorithms that determine the websites or news items likely to be most “relevant” for each user. The criteria used for determining relevance are many and varied, but some are calibrated by what your digital trail reveals about your interests and social network and, in that sense, the search results or news items that appear in your feed are personalized for you. But these powerful algorithms, which can indeed shape how you see the world, are proprietary and secret, which is wrong. So, Merkel argues, they should be less opaque.

The article also is correct when it says:

So just publishing secret stuff doesn’t do the trick. In a way, this is the hard lesson that WikiLeaks learned.

The write up uses Google as a whipping post. The issue is not math. The issue is the gap between those who use methods that are “obvious” and those who look for fuzzy solutions. Why not focus on other companies which use “obvious” systems and methods? Answer: Google is a big, fat, slow moving, predictable, ageing target.

Convenient for real journalists. Oh, 89 percent of this rare species do their research via Google, clueless about how the sausage is made. Grab those open source documents and start reading.

Stephen E Arnold, January 3, 2017

First God, Then History, and Now Prediction: All Dead Like Marley

January 3, 2017

I read a chunk of what looks to me like content marketing called “The Death of Prediction.” Prediction seems like a soft target. There were the polls which made clear that Donald J. Trump was a loser. Well, how did that work out? For some technology titans, the predictions omitted a grim pilgrimage to Trump Tower to meet the deal maker in person. Happy faces? Not so many, judging from the snaps of the Sillycon Valley crowd and one sycophant from Armonk.

The write up points out that predictive analytics are history. The future is “explanatory analytics.” An outfit called Quantifind has figured out that explaining is better than predicting. My hunch is that explaining is a little less risky. Saying that the Donald would lose is tough to explain when the Donald allegedly “won.”

Explaining is looser. The black-white, one-two, yes-no nature of a prediction is a bit less gelatinous.

So what’s the explainer explaining? The checklist is interesting:

  1. Alert me when it matters. The idea is that a system or smart software will proactively note when something important happens and send one of those mobile phone icon things to get a human to shift attention to the new thing. Nothing like distraction, I say. (A bare-bones sketch of this idea appears after the list.)
  2. Explore why on one’s own. Yep, this works really well for spelunkers who find themselves trapped. Exploration is okay, but it is helpful to [a] know where one is, [b] know where one is going, and [c] know the territory. Caves can be killers, not just dark and damp.
  3. Quantify impact in “real” dollars. The notion of quantifying strikes me as important. But doesn’t one quantify to determine whether the prediction was on the money? I sniff a bit of flaming contradiction. The notion of knowing something in real time is good too. Now the problem becomes, “What’s real time?” I have tilled this field before; saying “real time” is different from delivering what one expects, what the system can do, and what the outfit can afford.
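Here is the bare-bones Python sketch promised in item 1, under the simplest possible assumption: flag a new value when it drifts far from the recent baseline. Real “explanatory analytics” products are presumably far fancier.

import statistics

# Alert when a new value is more than z_threshold standard deviations away
# from the mean of the recent history.
def should_alert(history, new_value, z_threshold=3.0):
    if len(history) < 2:
        return False
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1e-9
    return abs(new_value - mean) / stdev > z_threshold

baseline = [100, 98, 103, 101, 99, 102]
print(should_alert(baseline, 101))   # False: business as usual
print(should_alert(baseline, 160))   # True: ping the distracted human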

It’s not even 2017, and I have learned that “prediction” is dead. I hope someone tells the folks at Recorded Future and Palantir Technologies. Will they listen?

Buzzwording with cacaphones is definitely alive and kicking.

Stephen E Arnold, January 3, 2017

US Patent Search Has a Ways to Go

January 3, 2017

A recent report from the U.S. Government Accountability Office, entitled “Patent Office Should Strengthen Search Capabilities and Better Monitor Examiners’ Work,” was published on June 30, 2016 and totals 91 pages as a PDF. The report examines the challenges the U.S. Patent and Trademark Office (USPTO) faces in identifying information relevant to a claimed invention, challenges that affect patent search. The website says the following with regard to the reason for this study,

GAO was asked to identify ways to improve patent quality through use of the best available prior art. This report (1) describes the challenges examiners face in identifying relevant prior art, (2) describes how selected foreign patent offices have addressed challenges in identifying relevant prior art, and (3) assesses the extent to which USPTO has taken steps to address challenges in identifying relevant prior art. GAO surveyed a generalizable stratified random sample of USPTO examiners with an 80 percent response rate; interviewed experts active in the field, including patent holders, attorneys, and academics; interviewed officials from USPTO and similarly sized foreign patent offices, and other knowledgeable stakeholders; and reviewed USPTO documents and relevant laws.

In short, the state of patent search is currently not very good. Timeliness and accuracy continue to be concerns when it comes to providing effective search in any capacity. Based on the study’s findings, it appears that bolstering these areas can be especially troublesome because of the clarity of patent applications and the state of USPTO’s policies and search tools.

Megan Feil, January 3, 2017

Norwegian Investigators Bust Child Pornography Racket over Dark Web

January 3, 2017

A yearlong investigation has busted a huge child pornography racket and resulted in a seizure of 150 Terabytes of pornographic material. Out of 51 accused, 20 so far have been arrested.

New Nationalist, in a news piece titled “150 Terabytes! Norway Busts Largest Dark Web, Child Porn Networks in History — US, UK Media Ignore Story,” says:

It’s one of the largest child sex abuse cases in history. A year-long special investigation called “Operation Darkroom” resulted in the seizure of 150 terabytes of data material in the form of photos, movies and chat logs containing atrocities against children as young as infancy, Norwegian police announced at a news conference in late November.

The investigation has opened a Pandora’s box of pedophiles. The list of accused mostly comprises educated individuals: politicians, lawyers, teachers, and even a police officer. Most of the accused have yet to be apprehended by the investigators.

Despite the bust and the ensuing press conference in November, US and UK media have turned a blind eye to the story. The news report further states:

The Library of Congress holds about 600 terabytes of Web data. Its online archive grows at a rate of about 5 terabytes per month. Also note the horrifically sadistic nature of the material seized. And note that police are investigating the reach as worldwide, which means it involves a massive scale of evil filth. But nobody in the criminally compliant mainstream media thinks its newsworthy.

It is possible the world’s media were preoccupied with the US presidential election, hence the low-key reporting. An interesting takeaway from this sad episode: the Dark Web is not only a hideout for hackers, terrorists, drug dealers, and hitmen; seemingly upright citizens lurk on the Dark Web too.

Vishal Ingole, January 3, 2017

HonkinNews: Third Google Legacy Video Now Available

January 3, 2017

Google: The Digital Gutenberg presents findings from Stephen E Arnold’s monograph about the Google system from 2007 to 2009. Topics covered in the video include how Google has become a digital version of the old Bell Telephone Yellow Pages.

As with the print Yellow Pages, changing the business model is very difficult. As a result, Google remains a one-trick pony, riding advertising and saddled with an approach that depends on the fast-eroding desktop search model. Google’s behavior — which some insist on calling monopolistic — is under attack by regulators in Europe. Can Google adapt?

Kenny Toth, January 3, 2017
