The Golden Age of Radio as Compared to the Internet

April 3, 2017

Here is an article going out to all those old fogies who remember when radio was the main source of news, entertainment, and communication.  Me Shed Society compares the Golden Age of Radio to the continuous information stream known as the Internet in the article, “The Internet Does To The World What Radio Did To The World.”

The author focuses on Marshall McLuhan’s book Understanding Media and its basic idea, “The medium is the message.”  There are three paragraphs that the author found provoking and still relevant, especially in today’s media-crazed times.  The author suggests that the Hitler references could be swapped for the Internet, or for any other influential person or medium, and the passages would remain just as apt.  The first paragraph states that Hitler’s rise to power was due in part to the then-new invention of radio and mass media.  The most profound paragraph is the second:

The power of radio to retribalize mankind, its almost instant reversal of individualism into collectivism, Fascist or Marxist, has gone unnoticed. So extraordinary is this unawareness that it is what needs to be explained. The transforming power of media is easy to explain, but the ignoring of this power is not at all easy to explain. It goes without saying that the universal ignoring of the psychic action of technology bespeaks some inherent function, some essential numbing of consciousness such as occurs under stress and shock conditions.

The third paragraph concludes that there should be some way to defend against media fallout, such as education and its foundations in dead tree formats, i.e. print.

Print, however, is falling out of favor, at least when it comes to the mass media, and education is built more on tests and meeting standards than on fighting hysteria.  Let us add another “-ism” to this list with the “extreme-ism” that runs rampant on TV and the Internet.

Whitney Grace, April 3, 2017

Parlez-Vous Qwant, N’est-Ce Pas?

March 2, 2017

One of Google’s biggest rivals is Yandex, at least in Russia.  Yandex is a Russian-owned and operated search engine and is more popular in Russia than Google, depending on the statistics.  It goes without saying that a search engine built and designed by native speakers has a significant advantage over foreign competition, and it looks like France wants a chance to beat Google.  Search Engine Journal reports that, “Qwant, A French Search Engine, Thinks It Can Take On Google-Here’s Why.”

Qwant was founded only in 2013, yet it has grown to serve twenty-one million monthly users in thirty countries.  The French search engine has seen 70% growth each year and will see more with its recent integration into Firefox and a soon-to-be-launched mobile app.  Qwant is very similar to DuckDuckGo in that it does not collect user data.  It also boasts more search categories than the usual news, images, and video, including music, social media, cars, health, and others.  Qwant has an interesting philosophy:

The company also has a unique philosophy that artificial intelligence and digital assistants can be educated without having to collect data on users. That’s a completely different philosophy than what is shared by Google, which collects every bit of information it can about users to fuel things like Google Home and Google Allo.

Qwant still wants to make a profit through pay-per-click and future partnerships with eBay and TripAdvisor, but it will do so without compromising a user’s privacy.  Qwant has a unique approach to search and to building AI assistants, but it has a long way to go before it reaches Google’s heights.

It needs to engage more users not only on laptops and desktop computers but also on mobile devices.  It also needs to form more partnerships with other browsers.

Bonne chance, Qwant!  But could you share how you plan to make AI assistants without user data?

Whitney Grace, March 2, 2017

 

The Pros and Cons of Human Developed Rules for Indexing Metadata

February 15, 2017

The article on Smartlogic titled The Future Is Happening Now puts forth the Semaphore platform as the technology filling the gap between NLP and AI when it comes to conversation. The article posits that in spite of the great strides in AI over the past 20 years, human speech is one area where AI still falls short.

The reason for this, according to the article, is that “words often have meaning based on context and the appearance of the letters and words.” It’s not enough to be able to identify a concept represented by a bunch of letters strung together. There are many rules that need to be put in place that affect the meaning of a word, from its placement in a sentence to the grammar and the words around it – all of these things are important.

Advocating human-developed rules for indexing is certainly interesting, and the author compares this logic to the process of raising her children to be multilingual. Semaphore is a model-driven, rules-based platform that auto-generates usage rules in order to expand the guidelines for a machine as it learns. The issue here is cost. Indexing large amounts of data is extremely cost-prohibitive, and that is before the maintenance of the rules even becomes part of the equation. In sum, this is a very old-school approach to AI that may make many people uncomfortable.

Chelsea Kerwin, February 15, 2017

Hacks to Make Your Google Dependence Even More Rewarding

January 24, 2017

The article on MakeUseOf titled This Cool Website Will Teach You Hundreds of Google Search Tips refers to SearchyApp, a collection of tricks, tips, and shortcuts to navigate Google search more easily. The lengthy list is divided into sections to be less daunting to readers. The article explains,

What makes this site so cool is that the tips are divided into sections, so it’s easy to find what you want. Here are the categories: Facts (e.g. find the elevation of a place, get customer service number,…) Math (e.g. solve a circle, use a calculator, etc.), Operators (search within number range, exclude a keyword from results, find related websites, etc.), Utilities (metronome, stopwatch, tip calculator, etc.), Easter Eggs (42, listen to animal sounds, once in a blue moon, etc.).

The Easter Eggs may be old news, but if you haven’t looked into them before, they are a great indicator of Google’s idea of a hoot. The Utilities section is chock full of useful little tools, from a dice roller to a distance calculator to unit conversion and language translation. Also useful are the Operators, the codes and shortcuts that tell Google exactly what you want, sometimes functioning as search restrictions or advanced search settings. The Operators are also worth checking out for those of us who have forgotten what our librarians taught us about online search.
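For readers who have not used them, a few of the operator-style queries covered in the list rely on standard Google search syntax and can be typed straight into the search box (the text after each query here is a description, not part of the query):

```
camera $50..$150        search within a number range
jaguar -car             exclude a keyword from results
related:nytimes.com     find websites related to a given site
```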

Chelsea Kerwin, January 24, 2017

Google Popular Times Now in Real Time

January 20, 2017

Just a quick honk about a little Google feature called Popular Times. LifeHacker points out an improvement to the tool in, “Google Will Now Show You How Busy a Business Is in Real Time.” To help users determine the most efficient time to shop or dine, the feature already provided a general assessment of businesses’ busiest times. Now, though, it bases that information on real-time metrics. Writer Thorin Klosowski specifies:

The real time data is rolling out starting today. You’ll see that it’s active if you see a ‘Live’ box next to the popular times when you search for a business. The data is based on location data and search terms, so it’s not perfect, but will at least give you a decent idea of whether or not you’ll easily find a place to sit at a bar or how packed a store might be. Alongside the real-time data comes some other info, including how long people stay at a location on average and hours by department, which is handy when a department like a pharmacy or deli closes earlier than the rest of a store.

Just one more way Google tries to make life a little easier for its users. That using it provides Google with even more free, valuable data is just a side effect, I’m sure.

Cynthia Murrell, January 20, 2017

Cybersecurity Technology and the Hacking Back Movement

December 19, 2016

Anti-surveillance hacker Phineas Fisher was covered in a recent Vice Motherboard article called, Hacker ‘Phineas Fisher’ Speaks on Camera for the First Time—Through a Puppet. He broke into Hacking Team, one of the companies Vice calls cyber mercenaries. Hacking Team and other such firms sell hacking and surveillance tools to police and intelligence agencies worldwide. The article quotes Fisher saying,

I imagine I’m not all that different from Hacking Team employees, I got the same addiction to that electronic pulse and the beauty of the baud [a reference to the famous Hacker’s manifesto]. I just had way different experiences growing up. ACAB [All Cops Are Bastards] is written on the walls, I imagine if you come from a background where you see police as largely a force for good then writing hacking tools for them makes some sense, but then Citizen Lab provides clear evidence it’s being used mostly for comic-book villain level of evil. Things like spying on journalists, dissidents, political opposition etc, and they just kind of ignore that and keep on working. So yeah, I guess no morals, but most people in their situation would do the same. It’s easy to rationalize things when it makes lots of money and your social circle, supporting your family etc depends on it.

The topics of ethical and unethical hacking were discussed in this article; Fisher states the tools used by Hacking Team were largely used for targeting political dissidents and journalists. Another interesting point to note is that his evaluation of Hacking Team’s software is that it “works well enough for what it’s used for” but the real value it offers is “packaging it in some point-and-click way.” An intuitive user experience remains key.

Megan Feil, December 19, 2016

Algorithmic Selling on Amazon Spells Buyer Beware

December 12, 2016

The article on Science Daily titled Amazon Might Not Always Be Pitching You the Best Prices, Researchers Find unveils the stacked deck that Amazon has created for sellers. Amazon rewards sellers who use automated algorithmic pricing by more often featuring those sellers’ items in the buy box, the more prominent and visible display. So what is algorithmic pricing, exactly? The article explains,

For a fee, any one of Amazon’s more than 2 million third-party sellers can easily subscribe to an automated pricing service…They then set up a pricing strategy by choosing from a menu of options like these: Find the lowest price offered and go above it (or below it) by X dollars or Y percentage, find Amazon’s own price for the item and adjust up or down relative to it, and so on. The service does the rest.
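The repricing menu described above can be sketched in a few lines of Python. This is a hypothetical illustration of the rule “find the lowest price offered and go above it (or below it) by X dollars,” not Amazon’s or any repricing vendor’s actual API; the function name and parameters are invented for the example.

```python
def reprice(competitor_prices, delta=0.50, mode="undercut"):
    """Return a new price relative to the lowest competing offer.

    mode="undercut" goes delta dollars below the lowest price;
    any other mode sits delta dollars above it.
    """
    lowest = min(competitor_prices)
    if mode == "undercut":
        return round(lowest - delta, 2)
    return round(lowest + delta, 2)

# Competitors list an item at $19.99, $21.50, and $24.00:
print(reprice([19.99, 21.50, 24.00]))                 # 19.49
print(reprice([19.99, 21.50, 24.00], mode="above"))   # 20.49
```

A seller subscribes, picks a rule like this, and the service reruns it continuously, which is how prices can drift away from the best deal without any human touching them.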

For the consumer, this means that searching on Amazon won’t necessarily produce the best value (at first click, anyway). It may be a mere dollar’s difference, but it could also be a more significant price increase of between $20 and $60. What is really startling is that even though fewer than 10% of sellers are “algo sellers,” they account for close to a third of the best-selling products. If you take anything away from this article, let it be that the price Amazon shows you first might not be the best one, so always do your research!

Chelsea Kerwin, December 12, 2016

Emphasize Data Suitability over Data Quantity

November 30, 2016

It seems obvious to us, but apparently, some folks need a reminder. Harvard Business Review proclaims, “You Don’t Need Big Data, You Need the Right Data.” Perhaps that distinction has gotten lost in the Big Data hype. Writer Maxwell Wessel points to Uber as an example. Though the company does collect a lot of data, the key is in which data it collects, and which it does not. Wessel explains:

In an era before we could summon a vehicle with the push of a button on our smartphones, humans required a thing called taxis. Taxis, while largely unconnected to the internet or any form of formal computer infrastructure, were actually the big data players in rider identification. Why? The taxi system required a network of eyeballs moving around the city scanning for human-shaped figures with their arms outstretched. While it wasn’t Intel and Hewlett-Packard infrastructure crunching the data, the amount of information processed to get the job done was massive. The fact that the computation happened inside of human brains doesn’t change the quantity of data captured and analyzed. Uber’s elegant solution was to stop running a biological anomaly detection algorithm on visual data — and just ask for the right data to get the job done. Who in the city needs a ride and where are they? That critical piece of information let the likes of Uber, Lyft, and Didi Chuxing revolutionize an industry.

In order for businesses to decide which data is worth their attention, the article suggests three guiding questions: “What decisions drive waste in your business?” “Which decisions could you automate to reduce waste?” (Example—Amazon’s pricing algorithms) and “What data would you need to do so?” (Example—Uber requires data on potential riders’ locations to efficiently send out drivers.) See the article for more notes on each of these guidelines.

Cynthia Murrell, November 30, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Black-Hat SEO Tactics Google Hates

November 16, 2016

The article on Search Engine Watch titled Guide to Black Hat SEO: Which Practices Will Earn You a Manual Penalty? follows up on a prior article that listed some of the sob stories of companies caught by Google using black-hat practices. Google does not take kindly to such activities, strangely enough. This article goes through some of those practices, which are meant to “falsely manipulate a website’s search position.” On link schemes, the article warns:

Any kind of scheme where links are bought and sold is frowned upon, however money doesn’t necessarily have to change hands… Be aware of anyone asking to swap links, particularly if both sites operate in completely different niches. Also stay away from any automated software that creates links to your site. If you have guest bloggers on your site, it’s a good idea to automatically Nofollow any links in their blog signature, as this can be seen as a ‘link trade’.
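For reference, “Nofollowing” a guest blogger’s signature link is done with the rel attribute on the link itself, which tells search engines not to treat the link as a ranking endorsement (the domain and link text here are placeholders):

```html
<!-- A guest author's signature link, marked so it does not pass
     ranking credit to the destination site -->
<a href="https://example.com" rel="nofollow">Guest Author</a>
```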

Other practices that earned a place on the list include automatically generated content, cloaking and irrelevant redirects, and hidden text and links. Doorway pages are multiple pages for a key phrase that lead visitors to the same end destination. If you think these activities don’t sound so terrible, you are in great company. Mozilla, BMW, and the BBC have all been caught and punished by Google for such tactics. Good or bad? You decide.

Chelsea Kerwin, November 16, 2016

Google Gives Third Day Keynote at Pubcon

November 1, 2016

Technology conferences are the thing to do when you want to launch a product, advertise a new business, network, or get a general consensus about the tech industry.  There are multiple conferences revolving around different aspects of the tech industry held each month.  In October 2016, Pubcon took place in Las Vegas, Nevada, and it had a very good turnout.  The thing that makes a convention, though, is the guests.  Pubcon did not disappoint: on the third day, Google’s search expert Gary Illyes delivered the morning keynote.  (Apparently, Illyes also holds the title Chief of Sunshine and Happiness at Google.)  Outbrain summed up the highlights of Pubcon 2016’s third day in “Pubcon 2016 Las Vegas: Day 3.”

Illyes spoke about search infrastructure, suggesting that people switch to HTTPS.  His biggest push for HTTPS was that it protects users from “annoying scenarios” and is good for UX.  Google is also pushing for more mobile-friendly Web sites.  It will remove the “mobile friendly” label from search results, and AMP can be used to make a user-friendly site.  There is even bigger news about page ranking in the Google algorithm:

Our systems weren’t designed to get two versions of the same content, so Google determines your ranking by the Desktop version only. Google is now switching to a mobile version first index. Gary explained that there are still a lot of issues with this change as they are losing a lot of signals (good ones) from desktop pages that don’t exist on mobile. Google created a separate mobile index, which will be its primary index. Desktop will be a secondary index that is less up to date.

As for ranking and spam, Illyes explained that Google is using human evaluators to better understand modified searches, RankBrain was not mentioned much, he wants to release the Panda algorithm, and Penguin will demote bad links in search results.  Google will also release “Google O” for voice search.

It looks like Google is trying to clean up search results and adapt to the growing mobile market, old news and new at the same time.

Whitney Grace, November 1, 2016
