U.S. Government Keeping Fewer New Secrets

February 24, 2017

We have good news and bad news for fans of government transparency. In its Secrecy News blog, the Federation of American Scientists reports, “Number of New Secrets in 2015 Near Historic Low.” Writer Steven Aftergood explains:

The production of new national security secrets dropped precipitously in the last five years and remained at historically low levels last year, according to a new annual report released today by the Information Security Oversight Office.

There were 53,425 new secrets (‘original classification decisions’) created by executive branch agencies in FY 2015. Though this represents a 14% increase from the all-time low achieved in FY 2014, it is still the second lowest number of original classification actions ever reported. Ten years earlier (2005), by contrast, there were more than 258,000 new secrets.

The new data appear to confirm that the national security classification system is undergoing a slow-motion process of transformation, involving continuing incremental reductions in classification activity and gradually increased disclosure. …

Meanwhile, ‘derivative classification activity,’ or the incorporation of existing secrets into new forms or products, dropped by 32%. The number of pages declassified increased by 30% over the year before.

A marked decrease in government secrecy—that’s the good news. On the other hand, the report reveals some troubling findings. For one thing, costs are not falling alongside classification activity; in fact, they rose by eight percent last year. Also, response times for mandatory declassification requests (MDRs) are growing, leaving more than 14,000 such requests languishing for over a year each. Finally, fewer newly classified documents carry the “declassify in ten years or less” specification, which means fewer items will be declassified automatically down the line.

Such red-tape tangles notwithstanding, the reduction in secret classifications does look like a sign that the government is moving toward more transparency. Can we trust the trajectory?

Cynthia Murrell, February 24, 2017

Search Engine Swaps User Faces into Results

February 22, 2017

Oh, the wonders of modern technology. Now, TechCrunch informs us, “This Amazing Search Engine Automatically Face Swaps You Into Your Image Results.” Searching may never be the same. Writer Devin Coldewey introduces us to Dreambit, a search engine that automatically swaps your face into select image-search results. The write-up includes some screenshots, and the results can be a bit surreal.

The system analyzes the picture of your face and determines how to intelligently crop it to leave nothing but your face. It then searches for images matching your search term — curly hair, for example — and looks for ‘doppelganger sets,’ images where the subject’s face is in a similar position to your own.

A similar process is done on the target images to mask out the faces and intelligently put your own in their place — and voila! You with curly hair, again and again and again. […]

It’s not limited to hairstyles, either: put yourself in a movie, a location, a painting — as long as there’s a similarly positioned face to swap yours with, the software can do it. A few facial features, like beards, make the edges of the face difficult to find, however, so you may not be able to swap with Rasputin or Gandalf.
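
Dreambit’s own pipeline is not public, but the crop-and-composite idea the write-up describes can be roughed out with off-the-shelf tools. What follows is a minimal, purely illustrative sketch using OpenCV’s stock Haar face detector; the file names are placeholders, and the real system’s appearance matching and blending are far more sophisticated.

# Illustrative only: crude face detect-and-paste with OpenCV's bundled cascade.
import cv2

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def first_face(image):
    """Return the bounding box (x, y, w, h) of the first detected face, or None."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces[0] if len(faces) else None

def swap_face(source, target):
    """Crop the face from `source` and paste it over the first face in `target`."""
    src_box, dst_box = first_face(source), first_face(target)
    if src_box is None or dst_box is None:
        return target
    sx, sy, sw, sh = src_box
    dx, dy, dw, dh = dst_box
    result = target.copy()
    result[dy:dy + dh, dx:dx + dw] = cv2.resize(
        source[sy:sy + sh, sx:sx + sw], (dw, dh))
    return result

if __name__ == "__main__":
    me = cv2.imread("my_photo.jpg")             # placeholder path
    match = cv2.imread("curly_hair_match.jpg")  # placeholder "doppelganger" image
    if me is not None and match is not None:
        cv2.imwrite("swapped.jpg", swap_face(me, match))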

Behind the nifty technology is the University of Washington’s Ira Kemelmacher-Shlizerman, a researcher in computer vision, facial recognition, and augmented reality. Her work could have more sober applications, too, like automated age-progressions to help with missing-person cases.  Though the software is still in beta, it is easy to foresee a wide array of uses ahead. Now, more than ever, don’t believe everything you see.

Cynthia Murrell, February 22, 2017

Gender Bias in Voice Recognition Software

February 21, 2017

A recent study seems to confirm what some have suspected: “Research Shows Gender Bias in Google’s Voice Recognition,” reports the Daily Dot. Not that this is anything new. Writer Selena Larson reminds us that voice recognition tech has a history of understanding men better than women, from a medical tracking system to voice-operated cars. She cites a recent study by linguistics researcher Rachael Tatman, who found that YouTube’s auto captions performed better on male voices than female ones by about 13 percent—no small discrepancy. (YouTube is owned by Google.)

Though no one is accusing the tech industry of purposely rendering female voices less effective, developers probably could have avoided this problem with some forethought. The article explains:

‘Language varies in systematic ways depending on how you’re talking,’ Tatman said in an interview. Differences could be based on gender, dialect, and other geographic and physical attributes that factor into how our voices sound. To train speech recognition software, developers use large datasets, either recorded on their own, or provided by other linguistic researchers. And sometimes, these datasets don’t include diverse speakers.

Tatman recommends a purposeful and organized approach to remedying the situation. Larson continues:

Tatman said the best first step to address issues in voice tech bias would be to build training sets that are stratified. Equal numbers of genders, different races, socioeconomic statuses, and dialects should be included, she said.

Automated technology is developed by humans, so our human biases can seep into the software and tools we are creating to supposedly make lives easier. But when systems fail to account for human bias, the results can be unfair and potentially harmful to groups underrepresented in the field in which these systems are built.
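
A balanced, stratified training set of the kind Tatman describes is straightforward to assemble once speaker metadata exists. Here is a minimal sketch, assuming a hypothetical corpus table with gender and dialect columns (the column names and toy rows are ours, not Tatman’s):

# Sketch: downsample every speaker group to the size of the smallest group
# so each combination of attributes is equally represented.
import pandas as pd

def stratified_sample(df, columns, seed=42):
    """Return a balanced subset with equal rows per combination of `columns`."""
    smallest = df.groupby(columns).size().min()
    return (df.groupby(columns)
              .sample(n=smallest, random_state=seed)
              .reset_index(drop=True))

if __name__ == "__main__":
    # Toy metadata for a speech corpus; a real one would track many more
    # attributes (age, socioeconomic status, and so on).
    corpus = pd.DataFrame({
        "clip_id": range(6),
        "gender":  ["f", "f", "m", "m", "m", "m"],
        "dialect": ["southern", "general", "southern", "general", "general", "southern"],
    })
    print(stratified_sample(corpus, ["gender", "dialect"]))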

Indeed, that’s the way bias works most of the time—it is more often the result of neglect than of malice. Avoiding it requires realizing there may be a problem in the first place and working to address it from the outset. I wonder what other technologies could benefit from that understanding.

Cynthia Murrell, February 21, 2017

Upgraded Social Media Monitoring

February 20, 2017

Analytics are catching up to content. A recent ZDNet article, “Digimind partners with Ditto to add image recognition to social media monitoring,” reminds us that images reign supreme on social media. Between Pinterest, Snapchat, and Instagram, messages are often conveyed through images rather than text. Capitalizing on this, intelligence software company Digimind has announced a partnership with Ditto Labs to introduce image-recognition technology into its social media monitoring software, Digimind Social. We learned,

The Ditto integration lets brands identify the use of their logos across Twitter no matter the item or context. The detected images are then collected and processed on Digimind Social in the same way textual references, articles, or social media postings are analysed. Logos that are small, obscured, upside down, or in cluttered image montages are recognised. Object and scene recognition means that brands can position their products exactly where their customers are using them. Sentiment is measured by the number of people in the image and how many of them are smiling. It even identifies objects such as bags, cars, car logos, or shoes.
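
The smile-counting notion of sentiment is easy to approximate with open source tools, even if Ditto Labs’ production models are proprietary and surely more robust. A rough, hypothetical sketch using OpenCV’s stock Haar cascades (the image path is a placeholder):

# Toy version of "sentiment = how many detected faces are smiling."
import cv2

FACE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
SMILE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def smile_score(image_path):
    """Return (faces_found, faces_smiling) for one image."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = FACE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    smiling = 0
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        if len(SMILE.detectMultiScale(roi, scaleFactor=1.7, minNeighbors=20)):
            smiling += 1
    return len(faces), smiling

if __name__ == "__main__":
    total, happy = smile_score("social_post.jpg")  # placeholder path
    print(f"{happy} of {total} detected faces appear to be smiling")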

It was only a matter of time before these types of features emerged in social media monitoring. For years now, images have been shown to increase engagement even on platforms that began with a focus on text. Will we see more watermarked logos on images? More creative ways to visually identify brands? Both are likely, and we will be watching to see what transpires.

Megan Feil, February 20, 2017


The Current State of Enterprise Search, by the Numbers

February 17, 2017

The article and delightful infographic on BA Insight titled Stats Show Enterprise Search is Still a Challenge build an interesting picture of the present challenges and opportunities surrounding enterprise search, or at least allude to them with the numbers offered. The article states,

As referenced by AIIM in an Industry Watch whitepaper on search and discovery, three out of four people agree that information is easier to find outside of their organizations than within. That is startling! With a more effective enterprise search implementation, these users feel that better decision-making and faster customer service are some of the top benefits that could be immediately realized.

What follows is a collection of random statistics about enterprise search. We would like to highlight one stat in particular: 58% of those investing in enterprise search get no payback after one year. In spite of the clear need for improvements, it is difficult to argue for a technology that is so long-term in its ROI, and so shaky where it is in place. However, there is a massive impact on efficiency when employees waste time looking for the information they need to do their jobs. In sum: you can’t live with it, and you can’t live (productively) without it.

Chelsea Kerwin, February 17, 2017

Enterprise Heads in the Sand on Data Loss Prevention

February 16, 2017

Enterprises could be doing so much more to protect themselves from cyber attacks, asserts Auriga Technical Manager James Parry in his piece, “The Dark Side: Mining the Dark Web for Cyber Intelligence,” at Information Security Buzz. Parry informs us that most businesses fail to do even the bare minimum to protect against hackers. This minimum, as he sees it, includes monitoring social media and underground chat forums for chatter about their own companies. After all, hackers are not known for their modesty, and many do boast about their exploits in the relative open. Most companies just aren’t bothering to look in that direction. Such an effort can also reveal those impersonating a business by co-opting its slogans and trademarks.

Companies who wish to go beyond the bare minimum will need to expand their monitoring to the dark web (and expand their data-processing capacity). From “shady” social media to black markets to hacker libraries, the dark web can reveal much about compromised data to those who know how to look. Parry writes:

Yet extrapolating this information into a meaningful form that can be used for threat intelligence is no mean feat. The complexity of accessing the dark web combined with the sheer amount of data involved, correlation of events, and interpretation of patterns is an enormous undertaking, particularly when you then consider that time is the determining factor here. Processing needs to be done fast and in real-time. Algorithms also need to be used which are able to identify and flag threats and vulnerabilities. Therefore, automated event collection and interrogation is required and for that you need the services of a Security Operations Centre (SOC).

The next generation SOC is able to perform this type of processing and detect patterns, from disparate data sources, real-time, historical data etc. These events can then be threat assessed and interpreted by security analysts to determine the level of risk posed to the enterprise. Forewarned, the enterprise can then align resources to reduce the impact of the attack. For instance, in the event of an emerging DoS attack, protection mechanisms can be switched from monitoring to mitigation mode and network capacity adjusted to weather the attack.
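
To make the automated collection-and-flagging idea concrete, here is a toy sketch; a real SOC ingests far richer feeds and applies correlation and scoring well beyond keyword matching, and every watch term and post below is invented.

# Toy event flagging: scan collected posts for watch terms and record the hits.
import re
from dataclasses import dataclass

WATCH_TERMS = ["acme corp", "acme.com", "credentials", "dumped", "ddos"]  # invented

@dataclass
class Event:
    source: str
    text: str
    matched: list

def flag_events(posts):
    """Return posts that mention any watch term, with the matches recorded."""
    flagged = []
    for source, text in posts:
        hits = [t for t in WATCH_TERMS
                if re.search(re.escape(t), text, re.IGNORECASE)]
        if hits:
            flagged.append(Event(source, text, hits))
    return flagged

if __name__ == "__main__":
    posts = [
        ("forum", "Fresh credentials from acme.com dumped tonight"),
        ("twitter", "Lunch was great"),
    ]
    for event in flag_events(posts):
        print(event.source, "->", event.matched)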

Note that Parry’s company, Auriga, supplies a variety of software and R&D services, including a Security Operations Center platform, so he might be a tad biased. Still, he has some good points. The article notes that SOC insights can also be used to predict future attacks and to prioritize security spending. Typically, SOC users have been big businesses, but, Parry points out, scalable and entry-level packages are making such tools available to smaller companies.

From monitoring mainstream social media to setting up an SOC to comb through dark web data, tools exist to combat hackers. The question, Parry observes, is whether companies will rise to the growing need and embrace those methods.

Cynthia Murrell, February 16, 2017

Why Do We Care More About Smaller Concerns? How Quantitative Numbing Impacts Emotional Response

February 14, 2017

The affecting article on Visual Business Intelligence titled When More is Less: Quantitative Numbing explains a phenomenon that many of us have probably witnessed on the news, observed in friends and family, and even experienced ourselves. A local news story about the death of an individual might provoke a stronger emotional response than news of a mass tragedy involving hundreds or thousands of deaths. Scott Slovic and Paul Slovic explore this in their book Numbers and Nerves. According to the article, this response is “built into our brains.” Another example addresses the Donald Trump effect,

Because he exhibits so many examples of bad behavior, those behaviors are having relatively little impact on us. The sheer number of incidents creates a numbing effect. Any one of Trump’s greedy, racist, sexist, vulgar, discriminatory, anti-intellectual, and dishonest acts, if considered alone, would concern us more than the huge number of examples that now confront us. The larger the number, the lesser the impact…This tendency… is automatic, immediate, and unconscious.

The article suggests that the only way to overcome this tendency is to engage with large quantities in a slower, more thoughtful way. An Abel Herzberg quote helps convey this approach when considering the large-scale tragedy of the Holocaust: “There were not six million Jews murdered: there was one murder, six million times.” The difference between that consideration of individual murders versus the total number is stark, and it needs to enter into the way we process daily events happening all over the world if we want to hold on to any semblance of compassion and humanity.

Chelsea Kerwin, February 14, 2017

Data Mining Firm Cambridge Analytica Set to Capture Trump White House Communications Contract and Trump Organization Sales Contract

February 13, 2017

The article titled Data Firm in Talks for Role in White House Messaging — And Trump Business, from The Guardian, discusses the future role of Cambridge Analytica in both White House communication and the Trump Organization. Cambridge Analytica is a data company based in London that boasts crucial marketing and psychological data on roughly 230 million Americans. The article points out,

Cambridge’s data could be helpful in both “driving sales and driving policy goals”, said the digital source, adding: “Cambridge is positioned to be the preferred vendor for all of that.”… The potential windfall for the company comes after the Mercers and Cambridge played key roles in Trump’s victory. Cambridge Analytica was tapped as a leading campaign data vendor as the Mercers… The Mercers reportedly pushed for the addition of a few top campaign aides, including Bannon and Kellyanne Conway, who became campaign manager.

Robert Mercer is a major investor in Cambridge Analytica as well as Breitbart News, Steve Bannon’s alt-right news organization. Steve Bannon is also on the board of Cambridge Analytica. The entanglements mount. Prior to potentially snagging these two wildly conflicting contracts, Cambridge Analytica helped Trump win the presidency with data modeling and psychological profiling that focus on building intimate relationships between brands and consumers to drive action.

Chelsea Kerwin, February 13, 2017

The Game-Changing Power of Visualization

February 8, 2017

Data visualization may be hitting at just the right time. Data Floq shared an article highlighting the latest thinking, Data Visualisation Can Change How We Think About The World. As the article mentions, we are primed for it biologically: the human eye and brain can comfortably process 10 to 12 separate images per second. Considering the output, visualization provides the ability to rapidly incorporate new data sets, remove metadata, and increase performance. Data visualization is not without its challenges, however. The article explains,

Perhaps the biggest challenge for data visualisation is understanding how to abstract and represent abstraction without compromising one of the two in the process. This challenge is deep rooted in the inherent simplicity of descriptive visual tools, which significantly clashes with the inherent complexity that defines predictive analytics. For the moment, this is a major issue in communicating data; The Chartered Management Institute found that 86% of 2,000 financiers surveyed in late 2013 were still struggling to turn volumes of data into valuable insights. There is a need for people to understand what led to the visualisation, each stage of the process that led to its design. But as we increasingly adopt more and more data, this is becoming increasingly difficult.

Is data visualization changing how we think about the world, or is the existence of big data the real culprit? We would argue data visualization is simply a tool to present data; it is a product rather than an impetus for a paradigm shift. This piece is right, however, in drawing attention to the conflict between detail and accessibility of information. We can’t help but think the answer lies in balancing the two.

Megan Feil, February 8, 2017

How to Quantify Culture? Counting the Bookstores and Libraries Is a Start

February 7, 2017

The article titled The Best Cities in the World for Book Lovers on Quartz conveys the data collected by the World Cities Culture Forum. That organization works to facilitate research and promote cultural endeavors around the world. And what could be a better measure of a city’s culture than its books? The article explains how the data collection works,

Led by the London mayor’s office and organized by UK consulting company Bop, the forum asks its partner cities to self-report on cultural institutions and consumption, including where people can get books. Over the past two years, 18 cities have reported how many bookstores they have, and 20 have reported on their public libraries. Hong Kong leads the pack with 21 bookshops per 100,000 people, though last time Buenos Aires sent in its count, in 2013, it was the leader, with 25.

New York sits comfortably in sixth place, but London, surprisingly, is near the bottom of the ranking with roughly 360 bookstores. Another measure the WCCF uses is libraries per capita. Edinburgh, of all places, surges to the top without any competition. New York is the only US city to even make the cut, with an embarrassing 2.5 libraries per 100,000 people. By contrast, Edinburgh has 60.5 per 100,000. What this analysis misses is the size and beauty of some of the bookstores and libraries of global cities. To bask in these images, visit Bookshelf Porn or this Mental Floss ranking of the top 7 gorgeous bookstores.
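
For the curious, the per-capita arithmetic behind such rankings is easy to reproduce. A quick sketch with made-up counts and populations (not the forum’s actual figures):

# rate per 100,000 residents = count / population * 100,000
def per_100k(count, population):
    return count / population * 100_000

cities = {
    # city: (bookstores, population) -- illustrative placeholders only
    "Bookville": (1_500, 7_300_000),
    "Textopolis": (360, 8_600_000),
}

for city, (stores, people) in cities.items():
    print(f"{city}: {per_100k(stores, people):.1f} bookshops per 100,000 people")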

Chelsea Kerwin, February 7, 2017
