May 13, 2016
Did you know that Facebook combs your content for criminal intent? American Intelligence Report reveals, “Facebook Monitors Your Private Messages and Photos for Criminal Activity, Reports them to Police.” Naturally, software is the first entity to scan content, using keywords and key phrases to flag items for human follow-up. Of particular interest are “loose” relationships. Reporter Kristan T. Harris writes:
“Reuters’ interview with the security officer explains, Facebook’s software focuses on conversations between members who have a loose relationship on the social network. For example, if two users aren’t friends, only recently became friends, have no mutual friends, interact with each other very little, have a significant age difference, and/or are located far from each other, the tool pays particular attention.
“The scanning program looks for certain phrases found in previously obtained chat records from criminals, including sexual predators (because of the Reuters story, we know of at least one alleged child predator who is being brought before the courts as a direct result of Facebook’s chat scanning). The relationship analysis and phrase material have to add up before a Facebook employee actually looks at communications and makes the final decision of whether to ping the authorities.
“’We’ve never wanted to set up an environment where we have employees looking at private communications, so it’s really important that we use technology that has a very low false-positive rate,’ Sullivan told Reuters.”
Uh-huh. So, one alleged predator has been caught. We’re told potential murder suspects have also been identified this way, with one case awash in 62 pages of Facebook-based evidence. Justice is a good thing, but Harris notes that most people will be uncomfortable with the idea of Facebook monitoring their communications. She goes on to wonder where this will lead; will it eventually be applied to misdemeanors and even, perhaps, to “thought crimes”?
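Reading between the lines of the Reuters description, the screening amounts to a two-stage filter: score the relationship for "looseness," match the message text against phrases harvested from known-bad chat logs, and escalate to a human reviewer only when both filters trip. Here is a minimal sketch of that logic; every signal, weight, threshold, and phrase below is an invented placeholder, not Facebook's actual criteria:

```python
from dataclasses import dataclass, field

# Placeholder phrase list; the real system draws on chat records
# previously obtained from criminals.
SUSPECT_PHRASES = {"meet me alone", "don't tell anyone"}

@dataclass
class User:
    name: str
    age: int
    friends: set = field(default_factory=set)      # names of friends
    messages_sent: dict = field(default_factory=dict)  # name -> count

def relationship_score(a: User, b: User) -> int:
    """Higher score means a 'looser' relationship between two users."""
    score = 0
    if b.name not in a.friends:
        score += 2                             # not friends at all
    if not (a.friends & b.friends):
        score += 1                             # no mutual friends
    if abs(a.age - b.age) >= 15:
        score += 1                             # significant age gap
    if a.messages_sent.get(b.name, 0) < 5:
        score += 1                             # little prior contact
    return score

def should_flag_for_review(a: User, b: User, message: str,
                           threshold: int = 3) -> bool:
    """Escalate to a human only when BOTH filters trip: the
    relationship is loose AND the text matches a suspect phrase."""
    loose = relationship_score(a, b) >= threshold
    phrase_hit = any(p in message.lower() for p in SUSPECT_PHRASES)
    return loose and phrase_hit
```

The low false-positive rate Sullivan emphasizes would come from exactly this conjunction: no employee looks at a conversation unless both the relationship analysis and the phrase match add up.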
Users of any social media platform must understand that anything they post could eventually be seen by anyone. Privacy policies can be updated without notice, and changes can apply to old as well as new data. And, of course, hackers are always lurking about. I was once cautioned to imagine that anything I post online I might as well be shouting on a public street; that advice has served me well.
Cynthia Murrell, May 13, 2016
May 11, 2016
Strife has plagued the human race since the beginning, but the Pentagon’s research arm thinks it may be able to get to the root of the problem. Defense Systems informs us, “DARPA Looks to Tap Social Media, Big Data to Probe the Causes of Social Unrest.” Writer George Leopold explains:
“The Defense Advanced Research Projects Agency (DARPA) announced this week it is launching a social science research effort designed to probe what unifies individuals and what causes communities to break down into ‘a chaotic mix of disconnected individuals.’ The Next Generation Social Science (NGS2) program will seek to harness steadily advancing digital connections and emerging social and data science tools to identify ‘the primary drivers of social cooperation, instability and resilience.’
“Adam Russell, DARPA’s NGS2 program manager, said the effort also would address current research limitations such as the technical and logistical hurdles faced when studying large populations and ever-larger datasets. The project seeks to build on the ability to link thousands of diverse volunteers online in order to tackle social science problems with implications for U.S. national and economic security.”
The initiative aims to blend social science research with the hard sciences, including computer and data science. Virtual reality, Web-based gaming, and other large platforms will come into play. Researchers hope their findings will make it easier to study large and diverse populations. NGS2 funding will emphasize predictive modeling, experimental structures, and improving the interpretation and reproducibility of results.
Will it be the Pentagon that finally finds the secret to world peace?
Cynthia Murrell, May 11, 2016
May 10, 2016
According to MIT Technology Review, it has finally happened. No longer is artificial intelligence the purview of data wonks alone—“AI Hits the Mainstream,” they declare. Targeted AI software is now being created for fields from insurance to manufacturing to health care. Reporter Nanette Byrnes is curious to see how commercialization will affect artificial intelligence, as well as how this technology will change different industries.
What about the current state of the AI field? Byrnes writes:
“Today the industry selling AI software and services remains a small one. Dave Schubmehl, research director at IDC, calculates that sales for all companies selling cognitive software platforms —excluding companies like Google and Facebook, which do research for their own use—added up to $1 billion last year. He predicts that by 2020 that number will exceed $10 billion. Other than a few large players like IBM and Palantir Technologies, AI remains a market of startups: 2,600 companies, by Bloomberg’s count. That’s because despite rapid progress in the technologies collectively known as artificial intelligence—pattern recognition, natural language processing, image recognition, and hypothesis generation, among others—there still remains a long way to go.”
The article examines ways some companies are already using artificial intelligence. For example, insurance and financial firm USAA is investigating its use to prevent identity theft, while GE is now using it to detect damage to its airplanes’ engine blades. Byrnes also points to MyFitnessPal, Under Armour’s extremely successful diet and exercise tracking app. Through a deal with IBM, Under Armour is blending data from that site with outside research to help better target potential consumers.
The article wraps up by reassuring us that, despite science fiction assertions to the contrary, machine learning will always require human guidance. If you doubt, consider recent events—Google’s self-driving car’s errant lane change and Microsoft’s racist chatbot. It is clear the kids still need us, at least for now.
Cynthia Murrell, May 10, 2016
May 9, 2016
The Oxford University Press’s blog discusses law enforcement’s interest in the shady side of the Internet in its post, “Infiltrating the Dark Web.” Writer Andrew Staniforth observes that the growth of crime on the Dark Web calls for new tactics. He writes:
“Criminals conducting online abuses, thefts, frauds, and terrorism have already shown their capacity to defeat Information Communication Technology (ICT) security measures, as well as displaying an indifference to national or international laws designed to stop them. The uncomfortable truth is that as long as online criminal activities remain profitable, the miscreants will continue, and as long as technology advances, the plotters and conspirators who frequent the Dark Web will continue to evolve at a pace beyond the reach of traditional law enforcement methods.
“There is, however, some glimmer of light amongst the dark projection of cybercrime as a new generation of cyber-cops are fighting back. Nowhere is this more apparent than the newly created Joint Cybercrime Action Taskforce (J-CAT) within Europol, who now provide a dynamic response to strengthen the fight against cybercrime within the European Union and beyond Member States borders. J-CAT seeks to stimulate and facilitate the joint identification, prioritisation, and initiation of cross-border investigations against key cybercrime threats and targets – fulfilling its mission to pro-actively drive intelligence-led actions against those online users with criminal intentions.”
The article holds up J-CAT as a model for fighting cybercrime. It also emphasizes the importance of allocating resources for gathering intelligence, and notes that agencies are increasingly focused on solutions that can operate in mobile and cloud environments. Increased collaboration, however, may make the biggest difference in the fight against criminals operating on the Dark Web.
Cynthia Murrell, May 9, 2016
May 8, 2016
We’ve run across an interesting list of companies at Let’s Talk Payments, “Europe’s Elite Cybersecurity Club.” The bare-bones roster names, and links to, 28 cybersecurity companies, with a brief description of each. See the original for the descriptions, but here are their entries:
SpamTitan, Gemalto, Avira, itWatch, BT, Sophos, DFLabs, ImmuniWeb, Silent Circle, Deep-Secure, SentryBay, AVG Technologies, Clearswift, ESNC, DriveLock, BitDefender, neXus, Thales, Cryptovision, Secunia, Osirium, Qosmos, Digital Shadows, F-Secure, Smoothwall, Brainloop, TrulyProtect, and Enorasys Security Analytics
It is a fine list as far as it goes, but we notice it is not exactly complete. For example, where is FinFisher’s parent company, Gamma International? Still, the list is a concise and valuable source for anyone interested in learning more about these companies.
Cynthia Murrell, May 8, 2016
May 7, 2016
Ever wonder how hackers fill job openings, search-related or otherwise? A discussion at the forum tehPARADOX.COM considers, “How Hackers Recruit New Talent.” Poster MorningLightMountain cites a recent study by cybersecurity firm Digital Shadows, which reportedly examined around 100 million websites, both on the surface web and on the dark web, for recruiting practices. We learn:
“The researchers found that the process hackers use to recruit new hires mirrors the one most job-seekers are used to. (The interview, for example, isn’t gone—it just might involve some anonymizing technology.) Just like in any other industry, hackers looking for fresh talent start by exploring their network, says Rick Holland, the vice president of strategy at Digital Shadows. ‘Reputation is really, really key,’ Holland says, so a candidate who comes highly recommended from a trusted peer is off to a great start. When hiring criminals, reputation isn’t just about who gets the job done best: There’s an omnipresent danger that the particularly eager candidate on the other end of the line is actually an undercover FBI agent. A few well-placed references can help allay those fears.”
Recruiters, we’re told, frequently advertise on hacker forums. These groups reach many potential recruits and are often password-protected. However, it is pretty easy to trace anyone who logs into one without bothering to anonymize their traffic. Another option is to advertise on the dark web—researchers say they even found a “sort of Monster.com for cybercrime” there.
The post goes on to discuss job requirements, interviews, and probationary periods. We’re reminded that, no matter how many advanced cybersecurity tools get pushed to market, most attacks are pretty basic; they involve approaches like denial-of-service and SQL injection. So, MorningLightMountain advises, any job-seeking hackers should be good to go if they just keep up those skills.
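As a reminder of just how basic a SQL injection is, here is a toy illustration, using Python’s built-in sqlite3 module and an invented users table, of the difference between splicing user input into a query string and binding it as a parameter:

```python
import sqlite3

# Toy database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

def lookup_vulnerable(name):
    # DON'T do this: the input becomes part of the SQL itself.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # Parameter binding keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(lookup_vulnerable(payload))  # the injected OR matches every row
print(lookup_safe(payload))        # matches nothing: no such name
```

The vulnerable version turns the payload into `WHERE name = 'x' OR '1'='1'`, which is true for every row; the safe version simply searches for a user literally named `x' OR '1'='1`.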
Cynthia Murrell, May 7, 2016
May 6, 2016
A martial artist once told me that an individual’s fighting style, if defined enough, was like a set of fingerprints. The same can be said for painting style, book preferences, and even Netflix selections, but what about something as anonymous as a computer mouse’s movement? Here is a new scary thought from PC & Tech Authority: “Researcher Can Identify Tor Users By Their Mouse Movements.”
It seems far-fetched, especially when one considers how random this data appears. This is the age of big data, but looking at the claim of the researcher behind it, Jose Carlos Norte, from a logical standpoint, one must consider that not all computer mice are made the same: some use lasers, others trackballs, and what about a laptop’s track pad? As diverse as computer users are, there are broad similarities across the population, and random mouse movement is not individualistic enough to ID a person. Fear not, Tor users; move and click away in peace.
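For the curious, the raw material such a fingerprint would start from is easy to picture: timestamped cursor positions captured in the browser and reduced to summary statistics. The features below are illustrative guesses at that kind of reduction, not Norte’s actual method:

```python
import math

def mouse_features(samples):
    """Reduce a trail of (t_seconds, x, y) samples, in time order,
    to a few summary statistics of the sort a movement-biometrics
    demo might compare across sessions."""
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt > 0:
            # pixels per second between consecutive samples
            speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
    return {
        "mean_speed": sum(speeds) / len(speeds) if speeds else 0.0,
        "max_speed": max(speeds, default=0.0),
        "n_samples": len(samples),
    }
```

Whether a handful of numbers like these is distinctive enough to single out one Tor user from millions is exactly the point in dispute above.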
May 5, 2016
The article on Motherboard titled “Why the US Is Buying Up So Many UK Artificial Intelligence Companies” surveys the rising tech community in the UK. There is some concern about the recent trend of UK AI and machine-learning startups being acquired by US giants (HP and Autonomy, Google and DeepMind, Microsoft and SwiftKey, and Apple and VocalIQ). The acquisitions make sense given the investment and platforms cutting-edge AI requires, which are not yet available in the UK. The article explains,
“And as AI increasingly becomes core to many tech products, experts become a limited resource. ‘All of the big US companies are working on the subject and then looking at opportunities everywhere’…
“Many of the snapped-up UK firms are the fruits of research at Britain’s top universities—add to the list above Evi Technologies (Amazon), Dark Blue Labs (Google), and Vision Factory (also Google), which are either directly spun out of Cambridge, Oxford, or University College London…”
The results may be more positive for the UK tech industry than they appear at first glance. Some companies, like DeepMind, have insisted on staying in the UK, and other industry players will return to the UK to launch their own ventures after spending years absorbing and contributing to the most current technologies and advancements.
Chelsea Kerwin, May 5, 2016
May 2, 2016
The article on The Verge titled “The Most Dangerous Writing App Lets You Delete All of Your Work for Free” speculates on the difficulty, and hubris, of charging money for technology that someone else can clone and offer for free. Manuel Ebert’s The Most Dangerous Writing App is a self-detonating notebook that wipes your work if you stop typing. The article explains,
“Ebert’s service appears to be a repackaging of Flowstate, a $15 Mac app released back in January that functions in a nearly identical way. He even calls it The Most Dangerous Writing App, which is a direct reference to the words displayed on Flowstate creator Overman’s website. The difference: Ebert’s app is free, which could help it take off among the admittedly niche community of writers looking for self-deleting online notebooks.”
One community that comes to mind is creative writers. Many writers, and poets in particular, rely on exercises akin to the philosophy of The Most Dangerous Writing App: don’t let your pen leave the page, even if you are just writing nonsense. Adding higher stakes to the process might be an interesting twist, especially for those writers who believe that just as the nonsense begins, truth and significance are unlocked.
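The mechanic itself is simple enough to sketch: a timer that wipes the buffer whenever the gap between keystrokes exceeds a grace period. This toy console version is only an illustration (the real apps run in the browser, and the five-second grace period here is an assumption):

```python
import time

class DangerousBuffer:
    """A notebook that self-destructs if you pause too long."""

    def __init__(self, grace_seconds=5.0, clock=time.monotonic):
        self.grace = grace_seconds
        self.clock = clock            # injectable for testing
        self.text = []
        self.last_keystroke = clock()

    def type_char(self, ch):
        now = self.clock()
        if now - self.last_keystroke > self.grace:
            self.text = []            # paused too long: all gone
        self.last_keystroke = now
        self.text.append(ch)

    def contents(self):
        return "".join(self.text)
```

The injectable clock is just there so the timeout behavior can be exercised without actually waiting five seconds.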
Chelsea Kerwin, May 2, 2016
April 29, 2016
It looks like another company is entering the arena of consumer cybersecurity. An article from Lifehacker, “Privacy Lets You Create ‘Virtual’ Credit Card Numbers, Deactivate One Instantly If It’s Stolen,” shares the details of Privacy. The tool generates disposable card numbers online, which can be tied to accounts at participating banks or to Visa cards, and lets users easily deactivate a number if it is stolen. The service is free to users because Privacy makes its money acting as a credit card processor. The article tells us,
“Privacy just gives you the ability to create virtual “accounts” that are authorized to charge a given amount to your account. You can set that account to be single use or multi-use, and if the amount is used up, then the transaction doesn’t go through to your main account. If one of your virtual accounts gets hit with an account you don’t recognize, you’ll be able to open the account from the Privacy Chrome or Firefox extension and shut it down immediately. The Chrome extension lets you manage your account quickly, auto-fill shopping sites with your virtual account numbers, or quickly create or shut down numbers.”
We think the concept of Privacy, and the existence of such a service, points to consumers’ perception that security measures are increasingly important. However, why trust Privacy itself? We’re not testing this idea, but perhaps Privacy is suited for Dark Web activity.
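The behavior the article describes (a number with a spending limit, optionally single-use, and instantly deactivatable) reduces to a small state machine. The sketch below is a toy model of that behavior, not Privacy’s actual API:

```python
import itertools

# Fake sequential card numbers, for illustration only.
_numbers = itertools.count(4000_0000_0000_0000)

class VirtualCard:
    """Toy model of a disposable virtual card number."""

    def __init__(self, limit, single_use=False):
        self.number = next(_numbers)
        self.limit = limit            # remaining authorized amount
        self.single_use = single_use
        self.active = True

    def charge(self, amount):
        """Return True if the charge is authorized against this card."""
        if not self.active or amount > self.limit:
            return False              # dead card or over the limit
        self.limit -= amount
        if self.single_use:
            self.active = False       # one transaction, then dead
        return True

    def deactivate(self):
        self.active = False           # e.g. the number was stolen
```

A declined charge against a virtual number never reaches the real account, which is the whole appeal: shutting down a stolen number costs the user nothing.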
Megan Feil, April 29, 2016