September 5, 2016
Serendipitous information discovery has been attempted through many apps, browsers, and more. In its own attempt at a solution, Yandex, Russia’s giant in online search, has added a new feature to its browser. A news release from PR Newswire, which appeared on 4 Traders under the title “Yandex Adds AI-Based Personal Recommendations to Browser,” tells us more. Fueling this feature is Yandex’s personalized content recommendation technology, Zen, which selects articles, videos, images, and more for its infinite content stream. This is the first time personally targeted content will appear in the user’s new tabs. The press release offers a description of the new feature:
The intelligent content discovery feed in Yandex Browser delivers personal recommendations based on the user’s location, browsing history, their viewing history and preferences in Zen, among hundreds of other factors. Zen uses natural language processing and computer vision to understand the verbal and visual content on the pages the user has viewed, liked or disliked, to offer them the content they are likely to like. To start exploring this new internet experience, all one needs to do is download Yandex Browser and give Zen some browsing history to work with. Alternatively, liking or disliking a few websites on Zen’s start up page will help it understand your preferences on the outset.
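The like/dislike feedback loop the press release describes can be sketched conceptually as a preference score over content features. In this minimal illustration the “features” are just topic tags, and all names and numbers are invented; the real Zen system uses natural language processing and computer vision signals, not this toy model:

```python
from collections import defaultdict

class PreferenceModel:
    """Toy stand-in for a like/dislike-driven content recommender."""

    def __init__(self):
        self.weights = defaultdict(float)  # topic tag -> learned preference

    def feedback(self, tags, liked, step=1.0):
        # Shift each tag's weight up on a like, down on a dislike.
        for tag in tags:
            self.weights[tag] += step if liked else -step

    def score(self, tags):
        # Higher score = content more likely to interest this user.
        return sum(self.weights[tag] for tag in tags)

model = PreferenceModel()
model.feedback(["football", "video"], liked=True)
model.feedback(["celebrity"], liked=False)

articles = {"Match highlights": ["football", "video"],
            "Red carpet photos": ["celebrity"]}
best = max(articles, key=lambda title: model.score(articles[title]))
print(best)  # Match highlights
```

A few likes and dislikes are enough to start ordering a content stream, which is why the press release suggests rating a few sites on Zen’s start-up page.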
The world of online search and information discovery is ever-evolving. For a preview of the new Yandex feature, go to their demo. The service works on all platforms in 24 countries and in 15 languages. The design of this feature implies people actually want to read all of their recommended content. Whether that is the case, and whether Zen is accurate enough for the design to be effective, time will tell.
Megan Feil, September 5, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden Web/DarkWeb meet up on September 27, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233599645/
August 5, 2016
Science fiction is a genre that inspires people to seek the impossible and make it a reality. Many modern inventors, scientists, computer programmers, and even artists attribute their success and careers to inspiration they garnered from the genre. Even search engine Google has pulled inspiration from science fiction, but one must wonder how many of Google’s ventures are real and how many are mere fiction. Vanity Fair asks, “Is Google’s BioTech Division The Next Theranos?”
Verily Life Sciences is Google X’s biotech division, and the company has yet to produce any biotechnology that has revolutionized the medical field. It bragged about a contact lens that would measure blood glucose levels and a wristband that could detect cancer. Verily employees have shared their views of the company’s projects, suggesting they serve more to fan the Google fanfare than to produce real products. Other experts say Google is displaying a “Silicon Valley arrogance” along the lines of Theranos.
Theranos misled investors about its “state of the art” technology and is now under criminal investigation. Verily is supposedly different from Theranos:
“Verily, however, is not positioning itself as a company with a salable product like Theranos. Verily ‘is not a products company,’ chief medical officer Jessica Mega argued Monday on Bloomberg TV. ‘But it’s a company really focused on trying to shift the needle when it comes to health and disease.’ That’s a distinction, luckily for Google, that could make all the difference.”
There is also a distinction between fantasy and reality, and a danger in counting your chickens before they hatch. Google should invest in experimental medical technology that could improve treatment and save lives, but it should not promise anything until it has significant research and a prototype as proof. Google should discuss its ventures, but not brag about them as if they were a sure thing.
July 22, 2016
The battle between Google and Oracle over Android’s use of Java has gone to federal court, and the trial is expected to conclude in June. CBS San Francisco Bay Area reports, “Former Google CEO Testifies in Oracle-Google Copyright Trial.” The brief write-up reveals the very simple defense of Eric Schmidt, who was Google’s CEO while Android was being developed (and is now executive chairman of Google’s young parent company, Alphabet): “We believed our approach was appropriate and permitted,” he stated.
Java was developed back in the ‘90s by Sun Microsystems, which was bought by Oracle in 2010. Google freely admits using Java in the development of Android, but they assert it counts as fair use—the legal doctrine that allows limited use of copyrighted material if it is sufficiently transformed or repurposed. Oracle disagrees, though Schmidt maintains Sun Microsystems saw it his way back in the day. The article tells us:
“Schmidt told the jury that when Google was developing Android nine years ago, he didn’t believe the company needed a license from Sun for the APIs. “We believed our approach was appropriate and permitted,” he said.
“Under questioning from Google attorney Robert Van Nest, Schmidt said that in 2007, Sun’s chief executive officer Jonathan Schwartz knew Google was building Android with Java, never expressed disapproval and never said Google needed a license from Sun.
“In cross-examination by Oracle attorney Peter Bicks, Schmidt acknowledged that he had said in 2007 that Google was under pressure to compete with the Apple Inc.’s newly released iPhone.”
Yes, it was; the kind of pressure that can erode objectivity. Did Google go beyond fair use in this case? The federal court will soon decide.
Cynthia Murrell, July 22, 2016
There is a Louisville, Kentucky Hidden Web/Dark Web meet up on July 26, 2016.
Information is at this link: http://bit.ly/29tVKpx.
July 21, 2016
Is big data good only for the hard sciences, or does it have something to offer the humanities? Writer Marcus A. Banks thinks it does, as he states in “Challenging the Print Paradigm: Web-Powered Scholarship is Set to Advance the Creation and Distribution of Research” at the Impact Blog (a project of the London School of Economics and Political Science). Banks suggests that data analysis can lead to a better understanding of, for example, how the perception of certain historical events has evolved over time. He goes on to explain what the literary community has to gain by moving forward:
“Despite my confidence in data mining I worry that our containers for scholarly works — ‘papers,’ ‘monographs’ — are anachronistic. When scholarship could only be expressed in print, on paper, these vessels made perfect sense. Today we have PDFs, which are surely a more efficient distribution mechanism than mailing print volumes to be placed onto library shelves. Nonetheless, PDFs reinforce the idea that scholarship must be portioned into discrete units, when the truth is that the best scholarship is sprawling, unbounded and mutable. The Web is flexible enough to facilitate this, in a way that print could never do. A print piece is necessarily reductive, while Web-oriented scholarship can be as capacious as required.
“To date, though, we still think in terms of print antecedents. This is not surprising, given that the Web is the merest of infants in historical terms. So we find that most advocacy surrounding open access publishing has been about increasing access to the PDFs of research articles. I am in complete support of this cause, especially when these articles report upon publicly or philanthropically funded research. Nonetheless, this feels narrow, quite modest. Text mining across a large swath of PDFs would yield useful insights, for sure. But this is not ‘data mining’ in the maximal sense of analyzing every aspect of a scholarly endeavor, even those that cannot easily be captured in print.”
Banks does note that a cautious approach to such fundamental change is warranted, citing the development of the data paper in 2011 as an example. He also mentions Scholarly HTML, a project that hopes to evolve into a formal W3C standard, and the Content Mine, a project aiming to glean 100 million facts from published research papers. The sky is the limit, Banks indicates, when it comes to Web-powered scholarship.
Cynthia Murrell, July 21, 2016
July 14, 2016
Authorities know a bit more about how criminals buy and sell drugs on the dark web, thanks to the cooperation of a captured dealer. DarknetPages’ article, “Dark Web and Clearnet Drug Vendor ‘Shiny Flakes’ Confessed his Crimes,” reveals that the 20-year-old Shiny Flakes, aka Maximilian S., was found with a bevy of illegal drugs, cash, and packaging equipment in his German home. Somehow, the police eventually convinced him to divulge his methods. We learn:
“[Maximilian] actually tried to make money on the internet legally in 2013 by copying fee-based pornographic websites. The thing is that the competition was pretty strong and because of that, he abandoned his idea soon after. So instead of spending the 2 thousand EUR he had at the time on porn, he thought it would be a better idea to spend it on drugs. So he went on to purchase 30 g of cocaine and shrooms from a popular German darknet market dealer and then sold them for a higher price on the dark web….
“Shiny Flakes was really worried about the quality of the drugs he was selling and that is why he always kept an eye on forum posts and read everything that his buyers posted about them. In fact, he took things beyond the opinions on the dark web and actually sent the drugs for testing. The tests conducted were both legally and illegally, with the legal tests taking place at Spain’s Energy Control or at Switzerland’s Safer Party. However, it seems that Maximilian also got in touch with the University of Munich where his products were tested by researchers who were paid in cocaine.”
Sounds efficient. Not only was Mr. Flakes conscientious about product quality, he was also apparently a hard worker, putting in up to 16 hours a day on his business. If only he had stayed on the right side of the law when that porn thing didn’t work out. To give him credit, Flakes had every reason to think he would not be caught; he was careful to follow best practices for staying anonymous on the dark web. Perhaps it was his booming success, and subsequent hiring of associates, that led to Shiny Flakes’ downfall. Whatever the case, authorities are sure to follow up on this information.
Cynthia Murrell, July 14, 2016
June 10, 2016
While many may have the perception that Google dominates in many business sectors, a recently published graph tells a different story when it comes to cloud computing. Datamation released a story, “Why Google Will Dominate Cloud Computing,” which shows Google in fourth place. Amazon, Microsoft, and IBM sit above the search giant in cloud infrastructure services, based on fourth-quarter 2015 market share and revenue growth. The article explains why Google appears to be struggling:
“Yet as impressive as its tech prowess is, GCP’s ability to cater to the prosaic needs of enterprise cloud customers has been limited, even fumbling. Google has always focused more on selling its own services rather than hosting legacy applications, but these legacy apps are the engine that drives business. Remarkably, GCP customers don’t get support for Oracle software, as they do on Amazon Web Services. Alas, catering to the needs of enterprise clients isn’t about deep genius – it’s about working with others. GCP has been like the high school student with straight A’s and perfect SAT scores that somehow doesn’t have too many friends.”
Despite the current situation, the article hypothesizes that Google Cloud Platform may have an edge in the long term. This is quite a bold prediction. We wonder if Datamation may approach the goog to sell some ads. Probably not, as real journalists do not seek money, right?
Megan Feil, June 10, 2016
June 10, 2016
Libraries are more than a place to check out free DVDs and books and use a computer. Most people do not believe this, and if you try to tell them otherwise, their eyes glaze over and they start chanting “obsolete” under their breath. BoingBoing, however, argues otherwise in “How Libraries Can Save The Internet Of Things From The Web’s Centralized Fate.” Over the past twenty years, the Internet has become more centralized, with content increasingly reliant on proprietary sites such as social media, Amazon, and Google.
Back in the old days, the greatest fear was that the government would take control of the Internet. The opposite has happened, with corporations consolidating the Internet. Decentralization efforts are under way, mostly to keep the Internet anonymous, and usually they are tied to the Dark Web. The next big thing is the “Internet of things,” which will be mostly decentralized, and that decentralization can be protected if the groundwork is laid now. Libraries can protect decentralized systems, because:
“Libraries can support a decentralized system with both computing power and lobbying muscle. The fights libraries have pursued for a free, fair and open Internet infrastructure show that we’re players in the political arena, which is every bit as important as servers and bandwidth. What would services built with library ethics and values look like? They’d look like libraries: Universal access to knowledge. Anonymity of information inquiry. A focus on literacy and on quality of information. A strong service commitment to ensure that they are available at every level of power and privilege.”
Libraries can teach people how to access services like Tor and disseminate that information more widely than many other institutions within the community. While this is possible, in many ways it is not realistic. Many decentralized services are associated with the Dark Web, which is held in a negative light. Libraries also have limited budgets, and a program like this needs funding the library board might not want to invest. Then there is the problem of finding someone to teach these services; many libraries are staffed by librarians whose technical knowledge is limited, although they can learn.
It is possible; it would just be hard.
June 7, 2016
Should the Dark Web be eradicated? An article from Mic weighs in with an editorial entitled “Shutting Down the Dark Web Is a Plainly Absurd Idea From Start to Finish.” Where is this idea coming from? Apparently 71 percent of Internet users believe the Dark Web “should be shut down,” according to a survey of over 24,000 people by the Canadian think tank Centre for International Governance Innovation. The Mic article takes issue with the notion that the Dark Web could be “shut down”:
“The Dark Net, or Deep Web or a dozen other names, isn’t a single set of sites so much as a network of sites that you need special protocols or software in order to find. Shutting down the network would mean shutting down every site and relay. In the case of the private web browser Tor, this means simultaneously shutting down over 7,000 secret nodes worldwide. The combined governments of various countries have enough trouble keeping the Pirate Bay from operating right on the open web, never mind trying to shut down an entire network of sites with encrypted communications and hidden IP addresses hosted worldwide.”
The feasibility of shutting down the Dark Web is also complicated by the fact that multiple networks, such as Tor, Freenet, and I2P, allow Dark Web access. Of course, there is also the issue, as the article acknowledges, that many uses of the Dark Web are benign or even further human rights causes. We appreciated a similar article from Softpedia, which pointed to the negative public perception stemming from media coverage of the takedowns of child pornography and drug sales sites. It is hard to know what goes unreported in mainstream media.
Megan Feil, June 7, 2016
May 19, 2016
Funnelback has been silent as of late, according to our research, but the search company has emerged from the tomb with eyes wide open and a heartbeat. The Funnelback blog has shared some new updates with us. The first bit of news, “Searchless In Seattle? (AKA We’ve Just Opened A New Office!),” explains that Funnelback has opened a new office in Seattle, Washington. The search company already has offices in Poland, the United Kingdom, and New Zealand, and now it wants to establish a branch in the United States. Given its successful track record in the finance, higher education, and government sectors in those countries, it stands a chance of offering more competition in the US. Seattle is also a reputable technology hub, and Funnelback will not have to contend with the Silicon Valley crowd.
The second piece of Funnelback news deals with “Driving Channel Shift With Site Search.” Channel shift is the process of moving users to the most efficient and cost-effective channel for finding and using information. It can be difficult to implement, but increasing the effectiveness of a Web site’s search can have a huge impact.
Being able to quickly and effectively locate information on a Web site not only saves time; it can also drive sales, enhance reputation, and more.
“You can go further still, using your search solution to provide targeted experiences; outputting results on maps, searching by postcode, allowing for short-listing and comparison baskets and even dynamically serving content related to what you know of a visitor, up-weighting content that is most relevant to them based on their browsing history or registered account.
Couple any of the features above with some intelligent search analytics, that highlight the content your users are finding and importantly what they aren’t finding (allowing you to make the relevant connections through promoted results, metadata tweaking or synonyms), and your online experience is starting to become a lot more appealing to users than that queue on hold at your call centre.”
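The “up-weighting” idea in the passage above can be sketched as a simple re-ranking step: results whose topic matches the visitor’s browsing history get a boost before sorting. The documents, topics, and boost factor below are invented for illustration; this is not Funnelback’s actual ranking method:

```python
# Minimal sketch: re-rank site-search results using a visitor's
# browsing-history topics. All data here is hypothetical.

def rank_results(results, history_topics, boost=2.0):
    """Order results by base relevance, boosting documents whose
    topic appears in the visitor's browsing history."""
    def score(doc):
        multiplier = boost if doc["topic"] in history_topics else 1.0
        return doc["relevance"] * multiplier
    return sorted(results, key=score, reverse=True)

results = [
    {"title": "Opening a savings account", "topic": "banking", "relevance": 0.6},
    {"title": "Campus visit days", "topic": "admissions", "relevance": 0.7},
]

# A visitor who has been browsing banking pages sees those results first,
# even though the admissions page has a higher base relevance.
ranked = rank_results(results, history_topics={"banking"})
print([doc["title"] for doc in ranked])
```

The same hook is where promoted results or synonym expansion from search analytics could adjust scores before the final sort.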
I have written about it many times, but a decent Web site search function can make or break a site. A poor one makes the Web site look unprofessional and does not inspire confidence in the business. It is a very big rookie mistake to make.
May 5, 2016
Search engine optimization, better known as SEO, is one of the prime tools Web site owners must master in order for their site to appear in search results. A common predicament most site owners find themselves in is that they may have a fantastic page, but if a search engine has not crawled it, the site might as well not exist. There are many aspects to mastering SEO and it can be daunting to attempt to make a site SEO friendly. While there are many guides that explain SEO, we recommend Mattias Geniar’s “A Technical Guide To SEO.”
Some SEO guides get too deep into technical jargon, but Geniar’s approach uses plain language, so it will be helpful even to those with the most novice SEO skills. Here is how Geniar explains it:
“If you’re the owner or maintainer of a website, you know SEO matters. A lot. This guide is meant to be an accurate list of all technical aspects of search engine optimisation. There’s a lot more to being “SEO friendly” than just the technical part. Content is, as always, still king. It doesn’t matter how technically OK your site is, if the content isn’t up to snuff, it won’t do you much good.”
Understanding the code behind SEO can be challenging, but thank goodness content remains the most important part of being picked up by Web crawlers. These tricks will only augment your content so it is picked up faster and your site receives more hits.
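One crawlability check from the technical side of SEO can even be done programmatically: verifying that a page is not blocked by the site’s robots.txt. A minimal sketch using Python’s standard urllib.robotparser, with a hypothetical site and rules:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules for an example site.
robots_txt = """User-agent: *
Disallow: /private/
Allow: /"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Public pages are crawlable; anything under /private/ is blocked
# for all crawlers, so it will never appear in search results.
print(parser.can_fetch("*", "https://example.com/articles/seo-guide"))  # True
print(parser.can_fetch("*", "https://example.com/private/drafts"))      # False
```

A page accidentally caught by a Disallow rule is exactly the “fantastic page the search engine never crawled” predicament described above.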