Google DeepMind Acquires Healthcare App

April 5, 2016

What will Google do next? An article from Business Insider tells us the latest: DeepMind, Google’s London-based artificial intelligence research group, has launched a new division called DeepMind Health and acquired a medical app called Hark. The article describes the acquisition,

“Hark — acquired by DeepMind for an undisclosed sum — is a clinical task management smartphone app that was created by Imperial College London academics Professor Ara Darzi and Dr Dominic King. Lord Darzi, director of the Institute of Global Health Innovation at Imperial College London, said in a statement: “It is incredibly exciting to have DeepMind – the world’s most exciting technology company and a true UK success story – working directly with NHS staff. The types of clinician-led technology collaborations that Mustafa Suleyman and DeepMind Health are supporting show enormous promise for patient care.”

The healthcare industry is ripe for disruptive technology, especially technologies which solve information and communications challenges. As the article alludes to, many issues in healthcare stem from too little information conveyed too late. Collaborations between researchers, medical professionals, and tech gurus appear to be a promising answer. Will Google’s Hark lead the way?

 

Megan Feil, April 5, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Venture Dollars Point to Growing Demand for Cyber Security

April 4, 2016

A UK cyber security startup has caught our attention — along with that of venture capitalists. The article Digital Shadows Gets $14M To Keep Growing Its Digital Risk Scanning Service from TechCrunch reports that Digital Shadows received $14 million in Series B funding. This software-as-a-service (SaaS) offering is geared toward enterprises with more than 1,000 employees that want to monitor risk and vulnerabilities in online activity related to the enterprise. The article describes Digital Shadows’ SearchLight, which was initially launched in May 2014,

“Digital Shadows’ flagship product, SearchLight, is a continuous real-time scan of more than 100 million data sources online and on the deep and dark web — cross-referencing customer specific data with the monitored sources to flag up instances where data might have inadvertently been posted online, for instance, or where a data breach or other unwanted disclosure might be occurring. The service also monitors any threat-related chatter about the company, such as potential hackers discussing specific attack vectors. It calls the service it offers “cyber situational awareness”.”

Think oversight regarding employees leaking sensitive data on the Dark Web: for example, a bank employee selling client data through Tor. How will this startup fare? Time will tell, but we will be watching them, along with other vendors offering similar services.

 

Megan Feil, April 4, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

Netflix Algorithm Defaults To “White” Content, Sweeps Diversity Under the Rug

April 1, 2016

The Marie Claire article titled Blackflix: How Netflix’s Algorithm Exposes Technology’s Racial Bias delves into the racial ramifications of Netflix’s much-lauded content recommendation algorithm. Many users may have had strange realizations about themselves or their preferences due to collisions with the system that the article calls “uncannily spot-on.” To sum it up: Netflix is really good at showing us what we want to watch, but only based on what we have already watched. When it comes to race, sexuality, even feminism (how many movies have I watched in the category “Movies With a Strong Female Lead?”), Netflix stays on course by only showing you films as diverse as the ones you have already selected. The article states,

“Or perhaps I could see the underlying problem, not in what we’re being shown, but in what we’re not being shown. I could see the fact that it’s not until you express specific interest in “black” content that you see how much of it Netflix has to offer. I could see the fact that to the new viewer, whose preferences aren’t yet logged and tracked by Netflix’s algorithm, “black” movies and shows are, for the most part, hidden from view.”

This sort of “default” suggests quite a lot about what Netflix has decided to put forward as normal or inoffensive content. To be fair, they do stress the importance of logging preferences from the initial sign up, but there is something annoying about the idea that there are people who can live in a bubble of straight, white, (or black and white) content. There are among those people some who might really enjoy and appreciate a powerful and relevant film like Fruitvale Station. If it wants to stay current, Netflix needs to show more appreciation or even awareness of its technical bias.
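The feedback loop described above, recommending only what resembles past viewing, can be illustrated with a minimal item-based similarity sketch in Python (the titles and viewing matrix are invented; Netflix’s actual system is of course far more elaborate):

```python
# Minimal item-based recommender sketch. Titles and the viewing
# matrix are invented for illustration only.
import math

# Rows are users; columns record whether each user watched a title (1) or not (0).
titles = ["Indie Drama A", "Indie Drama B", "Action Blockbuster", "Fruitvale Station"]
watched = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
]

def cosine(a, b):
    """Cosine similarity between two watch-history columns."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Column vectors: which users watched each title.
columns = [[row[j] for row in watched] for j in range(len(titles))]

def recommend(seen_index):
    """Rank the other titles by similarity to one the user already watched."""
    scores = [
        (titles[j], cosine(columns[seen_index], columns[j]))
        for j in range(len(titles)) if j != seen_index
    ]
    return sorted(scores, key=lambda t: t[1], reverse=True)

print(recommend(0))
```

A title that nobody in a viewer’s neighborhood has watched scores zero and never surfaces, which is exactly the hidden-from-view effect the article describes.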

Chelsea Kerwin, April 1, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

Third Party Company Profiteering

March 31, 2016

We might think that we keep our personal information from the NSA, but there are third party companies that legally tap ISPs and phone companies and share the information with government agencies. ZDNet shares the inside story about this legal loophole, “Meet The Shadowy Tech Brokers That Deliver Your Data To The NSA.”  These third party companies hide behind their neutral flag and reap a profit.  You might have heard of some of them: Yaana, Subsentio, and Neustar.

“On a typical day, these trusted third-parties can handle anything from subpoenas to search warrants and court orders, demanding the transfer of a person’s data to law enforcement. They are also cleared to work with classified and highly secretive FISA warrants. A single FISA order can be wide enough to force a company to turn over its entire store of customer data.”

Once the information passes through these third party companies, it is nearly impossible to figure out how it is used.  The third party companies do conduct audits, but that does little to protect the average consumer.  Personal information is just another commodity to buy, sell, and trade, with little respect shown for the individual consumer.  Who is going to stand up for the little guy?  Other than Edward Snowden?

 

Whitney Grace, March 31, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Slack Hires Noah Weiss

March 29, 2016

One thing you can always count on in the tech industry is that talent will jump from company to company to pursue the best and most innovative endeavors.  The latest tech worker to jump ship is Noah Weiss, who leaps from Foursquare to head a new Search, Learning, & Intelligence Group at Slack.  VentureBeat reports the story in “Slack Forms Search, Learning, & Intelligence Group On ‘Mining The Chat Corpus.’”  Slack is a team communication app, and its new Search, Learning, & Intelligence Group will be located in the company’s new New York office.

Weiss commented on the endeavor:

“The focus is on building features that make Slack better the bigger a company is and the more it uses Slack,” Weiss wrote today in a Medium post. “The success of the group will be measured in how much more productive, informed, and collaborative Slack users get — whether a company has 10, 100, or 10,000 people.”

For the new group, Weiss wants to hire experts in artificial intelligence, information retrieval, and natural language processing.  With this talent, the group might build features that help users find specific information in Slack, or perhaps it will work on mining the chat corpus.

Other tech companies have done the same.  Snapchat built a research team that uses artificial intelligence to analyze user content.  Flipboard and Pinterest are working on new image recognition technology.  Meanwhile Google, Facebook, Baidu, and Microsoft are working on their own artificial intelligence projects.

What will artificial intelligence develop into as more companies work on their secret projects?

 

Whitney Grace, March 29, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Reputable News Site Now on the Dark Web

March 28, 2016

Does the presence of a major news site lend an air of legitimacy to the Dark Web? Wired announces, “ProPublica Launches the Dark Web’s First Major News Site.” Reporter Andy Greenberg tells us that ProPublica recently introduced a version of their site running on the Tor network. To understand why anyone would need such a high level of privacy just to read the news, imagine living under a censorship-happy government; ProPublica was inspired to launch the site while working on a report about Chinese online censorship.

Why not just navigate to ProPublica’s site through Tor? Greenberg explains the danger of malicious exit nodes:

“Of course, any privacy-conscious user can achieve a very similar level of anonymity by simply visiting ProPublica’s regular site through their Tor Browser. But as Tigas points out, that approach does leave the reader open to the risk of a malicious ‘exit node,’ the computer in Tor’s network of volunteer proxies that makes the final connection to the destination site. If the anonymous user connects to a part of ProPublica that isn’t SSL-encrypted—most of the site runs SSL, but not yet every page—then the malicious relay could read what the user is viewing. Or even on SSL-encrypted pages, the exit node could simply see that the user was visiting ProPublica. When a Tor user visits ProPublica’s Tor hidden service, by contrast—and the hidden service can only be accessed when the visitor runs Tor—the traffic stays under the cloak of Tor’s anonymity all the way to ProPublica’s server.”

The article does acknowledge that Deep Dot Web has been serving up news on the Dark Web for some time now. However, some believe this move from a reputable publisher is a game changer. ProPublica developer Mike Tigas stated:

“Personally I hope other people see that there are uses for hidden services that aren’t just hosting illegal sites. Having good examples of sites like ProPublica and Securedrop using hidden services shows that these things aren’t just for criminals.”

Will law-abiding, but privacy-loving, citizens soon flood the shadowy landscape of the Dark Web?

 

Cynthia Murrell, March 28, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

Retraining the Librarian for the Future

March 28, 2016

The Internet is often described as the world’s biggest library, with all the world’s knowledge dumped on the floor.  It is the world’s biggest information database as well as the world’s biggest data mess.  In the olden days, librarians were the gateway to knowledge management, but now they need to ramp up their skills beyond the Dewey Decimal System and database searching.  Librarians need to do more, and Christian Lauersen’s personal blog explains how in “Data Scientist Training For Librarians-Re-Skilling Libraries For The Future.”

DST4L is a boot camp where librarians and other information professionals learn new skills to maintain relevancy.  Of the most recent session, the blog reports:

“DST4L has been held three times in The States and was to be set for the first time in Europe at Library of Technical University of Denmark just outside of Copenhagen. 40 participants from all across Europe were ready to get there hands dirty over three days marathon of relevant tools within data archiving, handling, sharing and analyzing. See the full program here and check the #DST4L hashtag at Twitter.”

Over the course of three days, the participants learned about OpenRefine, a spreadsheet-like application that can be used for data cleanup and transformation.  They also learned about the benefits of GitHub and how to program using Python.  These skills are well beyond the classes taught in library graduate programs, but it is a good sign that the profession is evolving even if academia lags behind.
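To give a flavor of the data cleanup skills involved, here is a rough Python sketch of the kind of value normalization OpenRefine performs with its clustering feature (the messy publisher names are invented examples, not DST4L course material):

```python
# Normalizing messy catalog values, roughly what OpenRefine's
# "fingerprint" clustering does. The sample records are invented.
import re

records = [
    "O'Reilly Media",
    "  o'reilly media ",
    "OReilly  Media",
    "Springer",
    "SPRINGER ",
]

def fingerprint(value):
    """OpenRefine-style fingerprint: lowercase, strip punctuation,
    then sort the remaining tokens so variant spellings collide."""
    tokens = re.sub(r"[^\w\s]", "", value.lower()).split()
    return " ".join(sorted(set(tokens)))

# Group records whose fingerprints match; each group is one cluster
# of spellings that likely refer to the same entity.
clusters = {}
for record in records:
    clusters.setdefault(fingerprint(record), []).append(record)

for key, members in clusters.items():
    print(key, "->", members)
```

The five messy strings collapse into two clusters, ready for a librarian to pick one canonical spelling per cluster.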

Whitney Grace, March 28, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

Play Search the Game

March 25, 2016

Within the past few years, gamers have been able to easily play brand new games as well as the old classics.  Nearly all of the games ever programmed are available through various channels, from Steam to system emulators.  It is easy to locate a game if you know the name, main character, or even the gaming system, but with thousands of games available, maybe you want to save time and skip the general-purpose search engine.  Good news, everyone!

Sofotex, a free software download Web site, has a unique piece of freeware that you will probably want to download if you are a gamer. Igrulka is a search engine app programmed to search only games.  Here is the official description:

“Igrulka is a unique software that helps you to search, find and play millions of games in the network.

“Once you download the installer, all you have to do is go to the download location on your computer and install the app.

Igrulka allows you to search for the games that you love either according to the categories they are in or by name. For example, you get games in the shooter, arcade, action, puzzle or racing games categories among many others.

If you would like to see more details about the available games, their names as well as their descriptions, all you have to do is hover over them using your mouse as shown below. Choose the game you want to play and click on it.”

According to the description, it looks like Igrulka searches through free games and perhaps classics from older systems.  To find out what Igrulka can actually do, download it and play search-results roulette.

 

Whitney Grace, March 25, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

Wikipedia Grants Users Better Search

March 24, 2016

Wikipedia is the de facto encyclopedia for sorting fact from fiction, although academic circles shun its use (scholars do use it, but never cite it).  Wikipedia does not usually make the news unless the story involves its fundraising campaign or Wikileaks releases sensitive information meant to remain confidential.  The Register tells us that Wikipedia makes the news for another reason: “Reluctant Wikipedia Lifts Lid On $2.5m Internet Search Engine Project.”  Wikipedia is better associated with the cataloging and dissemination of knowledge, but in order to use that knowledge, it needs to be searched.

Perhaps that is why the Wikimedia Foundation is “doing a Google” and will be investing a Knight Foundation grant in a search-related project.  The foundation finally released information about the grant, which is dedicated to providing funds for organizations pursuing innovative solutions related to information, community, media, and engagement.

“The grant provides seed money for stage one of the Knowledge Engine, described as “a system for discovering reliable and trustworthy information on the Internet”. It’s all about search and federation. The discovery stage includes an exploration of prototypes of future versions of Wikipedia.org which are “open channels” rather than an encyclopedia, analysing the query-to-content path, and embedding the Wikipedia Knowledge Engine ‘via carriers and Original Equipment Manufacturers’.”

The discovery stage will last twelve months, ending in August 2016.  The biggest risk for the search project would be if Google or Yahoo decided to invest in something similar.

What is interesting is that Wikipedia co-founder Jimmy Wales denied the Wikimedia Foundation was working on a search engine via the Knowledge Engine.  Andreas Kolbe has since reported in a Wikipedia Signpost article that the foundation is indeed building a search engine; readers were led to believe it would merely find information spread across the Wikipedia portals, but it appears to be something much more powerful.

Here is what the actual grant is funding:

“To advance new models for finding information by supporting stage one development of the Knowledge Engine by Wikipedia, a system for discovering reliable and trustworthy public information on the Internet.”

It sounds like a search engine that provides true and verifiable search results, which is what academic scholars have been after for years!  Wow!  Wikipedia might actually be worth a citation now.

 

Whitney Grace, March 24, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Stanford Offers Course Overviewing Roots of the Google Algorithm

March 23, 2016

The syllabus for Stanford’s computer science class CS 349: Data Mining, Search, and the World Wide Web, posted on Stanford.edu, provides an overview of some of the technologies and advances that led to Google search. The syllabus states,

“There has been a close collaboration between the Data Mining Group (MIDAS) and the Digital Libraries Group at Stanford in the area of Web research. It has culminated in the WebBase project whose aims are to maintain a local copy of the World Wide Web (or at least a substantial portion thereof) and to use it as a research tool for information retrieval, data mining, and other applications. This has led to the development of the PageRank algorithm, the Google search engine…”

The syllabus alone offers some extremely useful insights that could help students and laypeople understand the roots of Google search. Key inclusions are the Digital Equipment Corporation (DEC) and PageRank, the algorithm named for Larry Page that enabled Google to become Google. The algorithm ranks web pages based on how many other pages link to them, with links from highly ranked pages counting for more. Jon Kleinberg also played a key role by realizing that pages that link out to many good pages (hubs, like a directory or search engine) should also be seen as important. The larger context of the course is data mining and information retrieval.
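The link-counting idea can be made concrete with a short power-iteration sketch of PageRank in Python (the four-page link graph is invented for illustration):

```python
# Minimal PageRank via power iteration on a tiny invented link graph.
# links[p] lists the pages that p links out to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with a uniform rank
    for _ in range(iterations):
        # Every page keeps a small "teleport" share of rank...
        new = {p: (1 - damping) / n for p in pages}
        # ...and each page splits the rest of its rank among its outlinks.
        for p, outs in links.items():
            share = rank[p] / len(outs)
            for q in outs:
                new[q] += damping * share
        rank = new
    return rank

ranks = pagerank(links)
print(sorted(ranks, key=ranks.get, reverse=True))  # C collects the most links
```

Each page repeatedly shares its rank across its outbound links, so a page that collects links from already-important pages rises to the top, which is the insight that let Google rank the web.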

 

Chelsea Kerwin, March 23, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 
