Anonymous Transparency Project Boldly Attacks Google for Secrecy Then Dives Back Under Rug

February 23, 2017

The Mercury News article titled Secretive Foe Attacks Google Over Government Influence reports on the Transparency Project, an ironically super-secret group devoted to exposing Google’s insane level of influence. Of course, most of us are already perfectly aware of how much power Google holds over our politicians, our privacy, and our daily routines. Between Chrome, Google Search, YouTube, and the rest, not a day goes by that we don’t engage with the Silicon Valley Monster. The group claims:

“Over the past decade, Google has transformed itself from the dominant internet search engine into a global business empire that touches on almost every facet of people’s lives — often without their knowledge or consent,” the group’s first report said. Another report, based on White House guest logs, cites 427 visits by employees of Google and “associated entities” to the White House since January 2009, with 21 “small, intimate” meetings between senior Google executives and Obama.

While such information may be disturbing, it is hardly revelatory. So just who is behind the Transparency Project? The article provides a list of companies that Google has pissed off and stomped over on its path to glory. The only company that has stepped up to claim some funding is Oracle. But following the money in this case winds a strange, twisted path that actually leads the author back to Google, or at least to former Google CEO Eric Schmidt. This raises the question: is there anything Google isn’t influencing?

Chelsea Kerwin, February 23, 2017

Gender Bias in Voice Recognition Software

February 21, 2017

A recent study seems to confirm what some have suspected: “Research Shows Gender Bias in Google’s Voice Recognition,” reports the Daily Dot. Not that this is anything new. Writer Selena Larson reminds us that voice recognition tech has a history of understanding men better than women, from a medical tracking system to voice-operated cars. She cites a recent study by linguistics researcher Rachael Tatman, who found that YouTube’s auto captions performed better on male voices than on female ones by about 13 percent, no small discrepancy. (YouTube is owned by Google.)

Though no one is accusing the tech industry of deliberately building systems that serve female voices poorly, developers probably could have avoided this problem with some forethought. The article explains:

‘Language varies in systematic ways depending on how you’re talking,’ Tatman said in an interview. Differences could be based on gender, dialect, and other geographic and physical attributes that factor into how our voices sound. To train speech recognition software, developers use large datasets, either recorded on their own or provided by other linguistic researchers. And sometimes, these datasets don’t include diverse speakers.

Tatman recommends a purposeful and organized approach to remedying the situation. Larson continues:

Tatman said the best first step to address issues in voice tech bias would be to build training sets that are stratified. Equal numbers of genders, different races, socioeconomic statuses, and dialects should be included, she said.

Automated technology is developed by humans, so our human biases can seep into the software and tools we are creating, supposedly to make our lives easier. But when systems fail to account for human bias, the results can be unfair and potentially harmful to groups underrepresented in the field in which these systems are built.

Indeed, that is how bias usually works: it is more often the result of neglect than of malice. Avoiding it requires recognizing that there may be a problem in the first place and guarding against it from the outset. I wonder what other technologies could benefit from that understanding.
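For what it is worth, the stratification Tatman recommends is straightforward to prototype. Here is a minimal Python sketch, assuming a hypothetical pandas DataFrame of utterances tagged with speaker metadata; the column names and data are invented for illustration and are not drawn from her study:

```python
import pandas as pd

# Hypothetical corpus of transcribed utterances with speaker metadata.
corpus = pd.DataFrame({
    "utterance_id": range(8),
    "gender": ["F", "F", "M", "M", "F", "M", "F", "M"],
    "dialect": ["US-South", "US-West", "US-South", "US-West"] * 2,
})

# Take the same number of utterances from every gender/dialect cell,
# so that no single group dominates the training set.
per_group = corpus.groupby(["gender", "dialect"]).size().min()
balanced = (
    corpus.groupby(["gender", "dialect"], group_keys=False)
          .apply(lambda g: g.sample(per_group, random_state=42))
)
print(balanced.groupby(["gender", "dialect"]).size())
```

The same grouping keys could be extended with race, socioeconomic status, or any other attribute for which labeled speaker data exists.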

Cynthia Murrell, February 21, 2017

The Current State of Enterprise Search, by the Numbers

February 17, 2017

The article and delightful infographic on BA Insight titled Stats Show Enterprise Search is Still a Challenge build an interesting picture of the present challenges and opportunities surrounding enterprise search, or at least allude to them with the numbers offered. The article states,

As referenced by AIIM in an Industry Watch whitepaper on search and discovery, three out of four people agree that information is easier to find outside of their organizations than within. That is startling! With a more effective enterprise search implementation, these users feel that better decision-making and faster customer service are some of the top benefits that could be immediately realized.

What follows is a collection of assorted statistics about enterprise search. We would like to highlight one in particular: 58% of those investing in enterprise search see no payback after one year. Despite the clear need for improvements, it is difficult to argue for a technology whose ROI is so long-term and whose deployments are so shaky. At the same time, employees wasting time hunting for the information they need to do their jobs takes a massive toll on efficiency. In sum: you can’t live with it, and you can’t live (productively) without it.
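To make the efficiency point concrete, here is a back-of-the-envelope calculation with invented numbers; these figures are ours, not from the AIIM whitepaper:

```python
employees = 1_000        # hypothetical headcount
minutes_per_day = 30     # assumed time each employee spends searching
hourly_cost = 50         # assumed loaded labor cost per hour, USD
work_days = 230          # assumed working days per year

annual_cost = employees * (minutes_per_day / 60) * hourly_cost * work_days
print(f"${annual_cost:,.0f} lost per year")   # $5,750,000 lost per year
```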

Chelsea Kerwin, February 17, 2017

Investment Group Acquires Lexmark

February 15, 2017

We read with some trepidation the Kansas City Business Journal’s article, “Former Perceptive’s Parent Gets Acquired for $3.6B in Cash.” The parent company referred to here is Lexmark, which bought up one of our favorite search systems, ISYS Search, in 2012 and placed it under its Perceptive subsidiary, based in Lenexa, Kansas. We do hope this valuable tool is not lost in the shuffle.

Reporter Dora Grote specifies:

A few months after announcing that it was exploring ‘strategic alternatives,’ Lexmark International Inc. has agreed to be acquired by a consortium of investors led by Apex Technology Co. Ltd. and PAG Asia Capital for $3.6 billion cash, or $40.50 a share. Legend Capital Management Co. Ltd. is also a member of the consortium.

Lexmark Enterprise Software in Lenexa, formerly known as Perceptive Software, is expected to ‘continue unaffected and benefit strategically and financially from the transaction,’ the company wrote in a release. The Lenexa operation — which makes enterprise content management software that helps digitize paper records — dropped the Perceptive Software name for the parent’s brand in 2014. Lexmark, which acquired Perceptive for $280 million in cash in 2010, is a $3.7 billion global technology company.

If the Lexmark Enterprise Software division (formerly known as Perceptive) really will be unaffected, it seems it will be the lucky one. Grote notes that Lexmark has announced more than a thousand job cuts amid restructuring. She also observes that the company’s buildings in Lenexa have considerable space up for rent. Lexmark CEO Paul Rooke is expected to keep his job, and headquarters should remain in Lexington, Kentucky.

Cynthia Murrell, February 15, 2017

Oracle Pays Big Premium for NetSuite and Larry Ellison Benefits

February 6, 2017

The Reuters article titled Oracle-NetSuite Deal May Be Sweetest for Ellison emphasizes the perks of being Oracle executive chairman Larry Ellison, who ranks as the third richest person in America and the fifth richest in the world. The article suggests that his fortune of over $50B mingles with Oracle’s $160B in a way that makes, if no one else, at least Reuters very uncomfortable. The article does offer some context for the recent acquisition of NetSuite, for which Oracle paid a 44% premium, a company in which Ellison owns a 45% stake.

NetSuite was founded by an ex-Oracle employee, bankrolled by Ellison. While Oracle concentrated on selling enterprise software to giant corporations, the upstart focused on servicing small and medium-sized companies using the cloud. The two companies’ businesses have increasingly overlapped as larger customers have become comfortable using web-based software.

As a result, it makes strategic sense to combine the two firms. And the process seems to have been handled right, with a committee of independent Oracle directors calling the shots.

The article also points out that such high premiums aren’t all that unusual; Salesforce.com recently paid a 56% premium for Demandware. But in this case, things are complicated by Ellison’s potential conflict of interest. If Oracle had invested more in its cloud business, or in NetSuite itself, four or five years ago, it would not find itself forking over just under $10B now.
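For readers keeping score at home, an acquisition premium is simply the offer price measured against the target’s pre-announcement share price. A quick sketch, using an illustrative share price rather than NetSuite’s actual figure:

```python
def offer_price(pre_announcement_price: float, premium: float) -> float:
    """Per-share price implied by a given percentage premium."""
    return pre_announcement_price * (1 + premium)

# Illustrative only: a $75 stock taken out at the 44% premium cited above.
print(offer_price(75.00, 0.44))   # 108.0
```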

Chelsea Kerwin, February 6, 2017

Fight Fake News with Science

February 1, 2017

With all the recent chatter around “fake news,” one researcher has decided to approach the problem scientifically. An article at Fortune reveals “What a Map of the Fake-News Ecosystem Says About the Problem.” Writer Mathew Ingram introduces us to data-journalism expert and professor Jonathan Albright, of Elon University, who has mapped the fake-news ecosystem. Facebook and Google are just unwitting distributors of faux facts; Albright wanted to examine the network of sites putting this stuff out there in the first place. See the article for a description of his methodology; Ingram summarizes the results:

More than anything, the impression one gets from looking at Albright’s network map is that there are some extremely powerful ‘nodes,’ or hubs, that propel a lot of the traffic involving fake news. And it also shows an entire universe of sites that many people have probably never heard of. Two of the largest hubs Albright found were a site called Conservapedia—a kind of Wikipedia for the right wing—and another called Rense, both of which got huge amounts of incoming traffic. Other prominent destinations were sites like Breitbart News, DailyCaller and YouTube (the latter possibly as an attempt to monetize their traffic).

Albright said he specifically stayed away from trying to determine what or who is behind the rise of fake news. … He just wanted to try and get a handle on the scope of the problem, as well as a sense of how the various fake-news distribution or creation sites are inter-connected. Albright also wanted to do so with publicly-available data and open-source tools so others could build on it.

Albright also pointed out the folly of speculating on sources of fake news; such guesswork only “adds to the existing noise,” he noted. (Let’s hear it for common sense!) Ingram points out that, armed with Albright’s research, Google, Facebook, and other outlets may be better able to combat the problem.
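Albright’s map is, at bottom, a directed graph in which an edge means one site links to another, and “hubs” are nodes that attract many incoming links. A toy sketch with the open-source networkx library shows the idea; the sites and links here are invented, not Albright’s data or code:

```python
import networkx as nx

# Toy directed graph: an edge A -> B means site A links to site B.
g = nx.DiGraph()
g.add_edges_from([
    ("site_a", "hub_1"), ("site_b", "hub_1"), ("site_c", "hub_1"),
    ("site_c", "hub_2"), ("site_d", "hub_2"), ("site_a", "site_d"),
])

# Hubs attract many incoming links; in-degree is the simplest proxy.
by_in_degree = sorted(g.in_degree(), key=lambda pair: pair[1], reverse=True)
print(by_in_degree[:3])   # [('hub_1', 3), ('hub_2', 2), ('site_d', 1)]
```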

Cynthia Murrell, February 1, 2017

Rise of Fake News Should Have All of Us Questioning Our Realities

January 31, 2017

The NBC article titled Five Tips on How to Spot Fake News Online underscores the catastrophic effects of “fake news,” or news that flat-out delivers false and misleading information. It is important to separate “fake news” from ideologically slanted news sources and the mess of other issues dragging any semblance of journalistic integrity through the mud, but the article focuses on a key point: the absolute best practice is to take in a variety of news sources. Of course, when it comes to honest-to-goodness “fake news,” we would all be better off never reading it in the first place. The article states,

A growing number of websites are espousing misinformation or flat-out lies, raising concerns that falsehoods are going viral over social media without any mechanism to separate fact from fiction. And there is a legitimate fear that some readers can’t tell the difference. A study released by Stanford University found that 82 percent of middle schoolers couldn’t spot authentic news sources from ads labeled as “sponsored content.” The disconnect between true and false has been a boon for companies trying to turn a quick profit.

So how do we separate fact from fiction? By checking the web address and avoiding .lo and .co.com addresses, researching the author, differentiating between blogging and journalism, and, again, relying on a variety of sources such as print, TV, and digital. At a time when even the President-to-be, a man with the best intelligence in the world at his fingertips, chooses to spread fake news (aka nonsense) via Twitter that he won the popular vote (he did not), we all need to step up and examine the information we consume and allow to shape our worldview.
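The web-address check, at least, is easy to automate. A minimal Python sketch that flags the dubious endings NBC mentions; the function and flag list are our illustration, not NBC’s:

```python
from urllib.parse import urlparse

# Suspicious endings called out in the article; extend as needed.
SUSPECT_ENDINGS = (".lo", ".co.com")

def looks_suspect(url: str) -> bool:
    """Return True if the URL's host ends in a known-dubious suffix."""
    host = urlparse(url).netloc.lower()
    return host.endswith(SUSPECT_ENDINGS)

print(looks_suspect("http://abcnews.com.co.com/story"))  # True
print(looks_suspect("https://www.nbcnews.com/news"))     # False
```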

Chelsea Kerwin, January 31, 2017

Declassified CIA Data Makes History Fun

January 26, 2017

One thing I have always heard about making kids more interested in learning about the past is “making it come alive.” Textbooks suck at making anything come alive other than naps. What really makes history a reality and more interesting are documentaries, eyewitnesses, and actual artifacts. The CIA has a wealth of history, and History Tech shares some rare finds: “Tip Of The Week: 8 Decades Of Super Cool Declassified CIA Maps.” While the CIA World Factbook is one of the best history and geography tools on the Web, the CIA Flickr account is chock full of declassified goodies, such as spy tools, maps, and more.

The article’s author writes:

The best part of the Flickr account for me is the eight decades of CIA maps starting back in the 1940s prepared for the president and various government agencies. These are perfect for helping provide supplementary and corroborative materials for all sorts of historical thinking activities. You’ll find a wide variety of map types that could also easily work as stand-alone primary source.

These declassified maps were actually used by CIA personnel, political advisors, and presidents to make decisions that continue to impact our lives today. The CIA Flickr account is only one example of how the Internet is a wonderful tool for making history come to life. And although you normally need to be cautious about where online information comes from, these are official CIA records, so they are primary sources.

Whitney Grace, January 26, 2017

Cybersecurity Technologies Fueled by Artificial Intelligence

December 28, 2016

With terms like virus being staples of the cybersecurity realm, it is no surprise that the human immune system is the inspiration for the technology fueling one relatively new digital threat defense startup. The TechRepublic article “Darktrace bolsters machine learning-based security tools to automatically attack threats” reveals more details and context about Darktrace’s technology and positioning. Founded in 2013, Darktrace recently announced it has raised $65 million to help fund its global expansion. Its product suite comprises four products, including its basic cyber threat defense solution, also called Darktrace. The article expands on the offerings:

Darktrace also offers its Darktrace Threat Visualizer, which provides analysts and CXOs with a high-level, global view of their enterprise. Darktrace Antigena complements the core Darktrace product by automatically defending against potential threats that have been detected, acting as digital “antibodies.” Finally, the Industrial Immune System is a version of Darktrace designed for Industrial Control Systems (ICS). The key value provided by Darktrace is the fact that it relies on unsupervised machine learning, and it is able to detect threats on its own without much human interaction.

We echo the article’s takeaway that machine learning and other artificial intelligence technologies continue to grow in the cybersecurity sector, and attention on AI is only building in this industry and others. Perhaps AI is particularly well suited to cybersecurity, given the behind-the-scenes nature of Dark Web-related crime.
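Darktrace has not published its models, but the pattern described in the quote above, learning what “normal” looks like from unlabeled data and flagging deviations, can be sketched with a generic anomaly detector such as scikit-learn’s IsolationForest. The features and numbers below are invented for illustration; this is not Darktrace’s method:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-host features: [connections/min, MB sent out, distinct ports].
normal_traffic = rng.normal(loc=[20, 5, 3], scale=[4, 1, 1], size=(500, 3))

# Fit on unlabeled traffic only: no human-tagged "attack" examples needed.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A burst of outbound data across many ports should score as anomalous (-1).
suspicious = np.array([[300, 80, 40]])
print(detector.predict(suspicious))   # [-1]
```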

Megan Feil, December 28, 2016

UN Addresses Dark Web Drug Trade

December 16, 2016

Because individual nations are having spotty success fighting dark-web-based crime, the United Nations is stepping up. DeepDotWeb reports, “UN Trying to Find Methods to Stop the Dark Web Drug Trade.” The brief write-up cites the United Nations Office on Drugs and Crime’s (UNODC’s) latest annual report, which reveals new approaches to tackling drugs on the dark web. The article explains why law-enforcement agencies around the world have been having trouble fighting the hidden trade. Though part of the problem is technical, another part is one of politics and jurisdiction. We learn:

Since most of the users use Tor and encryption technologies to remain hidden while accessing dark net marketplaces and forums, law enforcement authorities have trouble identifying and locating their IP addresses. …

Police often find themselves trapped within legal boundaries. The most common legal issue authorities face in these cases is which jurisdiction to use, especially when the suspect’s location is unknown. There are problems regarding national sovereignty, too. When agencies hack a dark net user’s account, they do not really know in which country the malware will land. For this reason, the UNODC sees a major issue with sharing intelligence when it is not clear where in the world that intelligence would be best used.

The write-up notes that the FBI has been using tricks like hacking Dark Net users and tapping into DOD research. That agency is also calling for laws that would force suspects to decrypt their devices upon being charged. In the meantime, the UNODC supports the development of tools that will enhance each member state’s ability to “collect and exploit digital evidence.” To see the report itself, navigate here, where you will find an overview and a link to the PDF.

Cynthia Murrell, December 16, 2016
