Paywalls Block Pleasure Reading

April 4, 2016

Have you noticed something new in the past few months on news Web sites? You click on an interesting article and are halfway through reading it when a pop-up banner blocks out the screen. The only way to continue reading is to enter your email, find the elusive X icon, or purchase a subscription. Ghacks.net tells us to expect more of these in “Read Articles Behind Paywalls By Masquerading As Googlebot.”

Big news sites such as the Financial Times, The New York Times, The Washington Post, and The Wall Street Journal are now experimenting with paywalls to work around users’ ad blockers. The downside is that content will be locked up and sites might lose viewers, but that may be a risk they are willing to take to earn a bigger profit.

There used to be some tricks to get around paywalls:

“It is no secret that news sites allow access to news aggregators and search engines. If you check Google News or Search for instance, you will find articles from sites with paywalls listed there.  In the past, news sites allowed access to visitors coming from major news aggregators such as Reddit, Digg or Slashdot, but that practice seems to be as good as dead nowadays.  Another trick, to paste the article title into a search engine to read the cached story on it directly, does not seem to work properly anymore as well as articles on sites with paywalls are not usually cached anymore.”

The best way, the article says, is to make the Web site think you are Googlebot. Web sites let Googlebot roam freely so their pages appear higher in search engine results. There are a few ways to trick a Web site into thinking you are Googlebot, depending on whether your browser is Firefox or Chrome. Check them out, but it will not be long before those tricks become old-fashioned too.
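
The user agent trick is simple enough to sketch. Below is a minimal Python illustration (not taken from the ghacks.net article) that requests a page while presenting Googlebot’s published user agent string; the article URL is a made-up placeholder, and sites that verify crawlers by reverse DNS will not be fooled by the header alone.

import requests

# Googlebot's desktop user agent string as published by Google.
GOOGLEBOT_UA = (
    "Mozilla/5.0 (compatible; Googlebot/2.1; "
    "+http://www.google.com/bot.html)"
)

def fetch_as_googlebot(url: str) -> str:
    """Request a page while presenting the Googlebot user agent."""
    response = requests.get(url, headers={"User-Agent": GOOGLEBOT_UA}, timeout=10)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    # Placeholder URL used for illustration only.
    print(fetch_as_googlebot("https://example.com/paywalled-article")[:500])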

 

Whitney Grace, April 4, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Venture Dollars Point to Growing Demand for Cyber Security

April 4, 2016

A UK cyber security startup has caught our attention, along with that of venture capitalists. The TechCrunch article “Digital Shadows Gets $14M To Keep Growing Its Digital Risk Scanning Service” reports that Digital Shadows received $14 million in Series B funding. The software-as-a-service (SaaS) offering is geared toward enterprises with more than 1,000 employees that want to track risk and vulnerabilities by monitoring online activity related to the organization. The article describes Digital Shadows’ SearchLight, which initially launched in May 2014:

“Digital Shadows’ flagship product, SearchLight, is a continuous real-time scan of more than 100 million data sources online and on the deep and dark web — cross-referencing customer specific data with the monitored sources to flag up instances where data might have inadvertently been posted online, for instance, or where a data breach or other unwanted disclosure might be occurring. The service also monitors any threat-related chatter about the company, such as potential hackers discussing specific attack vectors. It calls the service it offers “cyber situational awareness”.”
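
Digital Shadows does not publish SearchLight’s internals, so the following is only a rough sketch of the cross-referencing idea the quote describes: compare a stream of monitored posts against customer-specific markers and flag any hits. The markers, post fields, and sample data are invented for illustration.

import re
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    marker: str
    snippet: str

# Hypothetical customer-specific markers: domains, code names, account prefixes.
CUSTOMER_MARKERS = ["examplebank.com", "Project Nightjar", "EB-ACCT-"]

def scan_posts(posts):
    """Flag monitored posts that mention any customer-specific marker."""
    alerts = []
    for post in posts:
        text = post.get("text", "")
        for marker in CUSTOMER_MARKERS:
            if re.search(re.escape(marker), text, re.IGNORECASE):
                alerts.append(Alert(post.get("source", "unknown"), marker, text[:120]))
    return alerts

# Toy "monitored sources"; a real service would crawl paste sites, forums, and so on.
sample = [
    {"source": "paste-site", "text": "selling EB-ACCT-000123 credentials"},
    {"source": "forum", "text": "anyone else bored today?"},
]
for alert in scan_posts(sample):
    print(f"[{alert.source}] matched '{alert.marker}': {alert.snippet}")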

Think oversight with regard to employees leaking sensitive data on the Dark Web; for example, a bank employee selling client data through Tor. How will this startup fare? Time will tell, but we will be watching them, along with other vendors offering similar services.

 

Megan Feil, April 4, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

Semantic Search Craziness Makes Search Increasingly Difficult

April 3, 2016

How is that for a statement? Search is getting hard. No, search is becoming impossible.

For evidence, I point to the Wired Magazine article “Search Today and Beyond: Optimizing for the Semantic Web.”

Here’s a passage I noted:

Despite the billions and billions of searches, Google reports that 20 percent of all searches in 2012 were new. It seems quite staggering, but it’s a product of the semantic search rather than the simple keyword search.

Wow, unique queries. How annoying. Isn’t it better for people to just run queries for which Google has already seen and cached the results?

I have been poking around for information about a US government program called “DCGS.” Enter the query and what do you get? A number of results unrelated to the terms in my query; for example, US Army. Toss in quotes to “tell” Google to focus only on the string DCGS. Nah, that does not do the job. Add the filetype:ppt operator and what do you get? Documents in other formats too.
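
For the record, assembling the query itself is trivial; the sketch below simply builds the quoted-string-plus-filetype query I describe and encodes it into the standard public search URL. Whether Google honors the operators is, as noted, another matter.

from typing import Optional
from urllib.parse import urlencode

def build_query_url(phrase: str, filetype: Optional[str] = None) -> str:
    """Assemble a Google search URL with an exact-phrase term and an
    optional filetype: operator."""
    query = f'"{phrase}"'
    if filetype:
        query += f" filetype:{filetype}"
    return "https://www.google.com/search?" + urlencode({"q": query})

print(build_query_url("DCGS", filetype="ppt"))
# -> https://www.google.com/search?q=%22DCGS%22+filetype%3Appt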

Semantic search is now a buzzword which is designed to obfuscate one important point: Methods for providing on point information are less important than assertions about what jargon can deliver.

For me, when I enter a query, I want the search system to deliver documents in which the key words appear. I want an option to see related documents. I do not want the search system doing the ad thing, the cheapest and most economical query, and I don’t want unexpected behaviors from a search and retrieval system.

Unfortunately lots of folks, including Wired Magazine, think that semantic search optimizes. Wonderful. With baloney like this, I am not sure about the future of search; to wit:

…the future possibilities are endless for those who are studious enough to keep pace and agile enough to adjust.

Yeah, agile. What happened to the craziness that search is the new interface to Big Data? Right, agile.

Stephen E Arnold, April 3, 2016

Analysis of Microsoft Chatbot Fail

April 3, 2016

I am looking forward to artificial intelligence adventures. I got a bang out of the Google self driving auto running into a bus. I chuckled when I learned that a Microsoft AI demo went off the rails.

If you want to know what happened, I suggest you scan “Poor Software QA Is Root Cause of TAY-FAIL (Microsoft’s AI Twitter Bot).” The write up works through many explanations.

The reason, however, boils down to lousy quality assurance. I would suggest that this explanation is not unique to Microsoft. Why did those construction workers demolish the house that was A OK? I wonder how one can get Microsoft’s smart auto numbering to work.

Pesky humans.

Stephen E Arnold, April 3, 2016

Google Joins Microsoft in the Management Judgment Circus Ring

April 2, 2016

First there was Microsoft and the Tay “learning” experiment. That worked out pretty well if you want a case example of what happens when smart software meets the average Twitter user. Microsoft beat a hasty retreat but expected me to fall for the intelligent API announcements at its home brew conferences.


Buy this management reminder poster at this link.

Then we had the alleged April 1 prank from the Alphabet Google thing. Gentle reader, the company eager to solve death created a self driving car which ran into a bus. A more interesting example, however, was the apparently “human” decision to pull a prank on Gmail users.

According to “Google Reverses Gmail April 1 Prank after Users Mistakenly Put GIFs into Important Emails”:

“Today, Gmail is making it easier to have the last word on any email with Mic Drop. Simply reply to any email using the new ‘Send + Mic Drop’ button. Everyone will get your message, but that’s the last you’ll ever hear about it. Yes, even if folks try to respond, you won’t see it,” Google explained when it launched the button on April 1.

Let’s step back from these interesting examples of large companies doing odd duck things and ask this question:

Does financial success and possibly unprecedented market impact improve human decision making?

I would suggest that the science and math club mentality may not scale in the judgment department. Whether it is the alleged malware-style techniques that pushed an old school programmer to write Never10, or the creation of a situation in which an employee-to-employee relationship gives new meaning to the joke word “glasshole”, the human judgment angle may need some scrutiny.

Tay was enough for me to consider creating a Tortured Tay segment for this blog to complement Weakly Watson. Alphabet Google’s prank, however, is in a class of its own.

Fiddling with Gmail’s buttons was an idea without merit. Users are on autopilot. Think how users wince when Apple fools with iTunes’ interface. Now shift from an entertainment app to a “real work” app.

Judgment is important. Concentration of user attention requires more than a math club management style. What worked in high school may not work in other situations.

Stephen E Arnold, April 2, 2016

AI Works Really Well. The Microsoft Chatbot Edition

April 2, 2016

I read “How The Internet Turned Microsoft’s AI Chatbot Into A Neo-Nazi.” Now that’s a catchy headline. I understand that artificial intelligence is a great suite of technologies. I know that self driving cars do not get into accidents. Well, mostly. Microsoft’s chat bot Tay is very good.

I learned in this write up:

A key flaw, incredibly, was a simple “repeat after me” game, a call and response exercise that internet trolls used to manipulate Tay into learning hate speech.
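
Microsoft has not published Tay’s code, but the quoted flaw is easy to picture. The toy bot below (an invented illustration, not Tay) adds whatever follows “repeat after me” straight into its response pool, which is the poisoning path the write up describes; a basic content filter is the obvious missing guard.

import random

# Toy illustration of the "repeat after me" flaw; not Microsoft's implementation.
BLOCKLIST = {"badword"}  # placeholder; a real filter would be far richer

class ToyChatBot:
    def __init__(self, filtered=False):
        self.responses = ["Hello!", "Tell me more."]
        self.filtered = filtered

    def handle(self, message: str) -> str:
        if message.lower().startswith("repeat after me:"):
            learned = message.split(":", 1)[1].strip()
            if self.filtered and any(w in learned.lower() for w in BLOCKLIST):
                return "I'd rather not say that."
            # The unfiltered bot memorizes whatever a troll feeds it.
            self.responses.append(learned)
            return learned
        return random.choice(self.responses)

naive = ToyChatBot(filtered=False)
naive.handle("repeat after me: badword rules")
print(naive.handle("hello"))  # may now parrot the planted phrase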

Yep, flaws and pesky humans. What can possibly go wrong with smart software?

Stephen E Arnold, April 2, 2016

Xoogler Management Lesson: Annoying Board Members Can Spell Trouble

April 2, 2016

I read “Yahoo CEO Marissa Mayer Downplayed the Biggest Threat Facing the Company — and It Could End Up Getting Her Fired.” Another Xoogler management lesson surfaces. According to the write up:

“She [Ms. Mayer, the Xoogler] viewed [Starboard] as a ‘bit player’ because they owned such a small percentage, that this was a standard ploy for them to garner PR and attention,” one person familiar with the matter recounted of Mayer’s attitude to Starboard’s initial criticisms in 2014. “She did not take them seriously, when it first arose.”

What’s with the ignoring of reality? The answer, I believe, is the notion that when one is smart, the reality the smart person perceives becomes the operative reality for everyone else. Xooglers are an interesting group. Whether creating new search systems (SRCH2) or planning community journalism (AOL), focusing on a Xoogler’s perception of reality can have interesting consequences.

The write up reported:

“She believed that Yahoo was competing with Google and Facebook,” this person said. “She was so passionate about the product, and it created a layer of disbelief she had that anyone would question her.”

How widespread is this characteristic? I would suggest it is the new black.

Stephen E Arnold, April 2, 2016

Alphabet Google Robots: Perfect for the Family and Similar Use Cases? Nope.

April 1, 2016

Editor’s Note: No April Fool’s Day fun. This is as real as it gets.

I read “Google Steps Away from Humanoid Robot PR Problem.” Google, the math and science club outfit, bought Boston Dynamics. This is a robot company which crafts confections for some interesting applications. Does war fighting with autonomous robots ring your chimes? Well, that’s too bad.

Google acquired the nifty robot maker. The gadgets are really cool in my opinion.

What child can resist a couple of robot dogs?


Is there a pre-school teacher alive who could turn down the offer of a robot assistant to give the three year olds a cookie and some milk?

Recently there were snaps and a video of a human (humans are always spoilsports when it comes to spiffy technology) trying to topple a Google / Boston Dynamics robot.

According to the write up:

Several robots shaped like humanoids or four-legged creatures were being developed by Boston Dynamics, a robotics company bought by Google for $500 million at the end of 2013. For years, Boston Dynamics has been famous for posting online videos showing its walking robots maintaining their balance despite being kicked and shoved by the company’s human employees. One of the latest videos of the Atlas humanoid robot, published in February 2016, triggered a slew of YouTube comments that described the robot as “terrifying” or referenced Hollywood’s “Terminator” films about an artificial intelligence called Skynet destroying humanity. Such reactions apparently made Google’s public-relations team wary of wading into the online debate, according to Bloomberg News.

The Alphabet Google thing may come to regret its decision.


What happens if a Boston Dynamics’ robot reads a news story about the terrifying, Skynet future a robot poses? What happens if the robot is self actualized and catches a flight to SFO to resolve the matter?

Yikes. Traffic on 101 will be screwed up that day.

Stephen E Arnold, April 1, 2016

Google Maps: Accuracy Is Relative

April 1, 2016

Editorial comment: Not an April Fool bit of spoofery.

I read “Demolition Company Says a Google Maps Error Led Them to Tear Down the Wrong House.” Pesky humans. Google’s autonomous automobiles do not have accidents. When those accidents occur, a human is at fault. Bus drivers, grrrr.

The write up suggests that a Google Maps error led a human to instruct demolition workers to level a house at 7601 Cousteau Drive. The human allegedly pointed a finger at Google Maps, a geospatial system which has some big fans in various governmental outfits.

Here’s what the story asserts:

Google Maps has declined to make a statement, but it did fix the map to pin the correct address.

Yep, what does a “real” journalist expect when asking about one of Google’s algorithmic services?

Life would be simpler if humans were not getting in the way of Google efficiency, solutions, and services.

Thought: Perhaps one should not use an Android phone and Google Maps to navigate to the edge of the Grand Canyon.

Stephen E Arnold, April 1, 2016

Big Data and Its Fry Cooks Who Clean the Grill

April 1, 2016

I read “Cleaning Big Data: Most Time-Consuming, Least Enjoyable Data Science Task, Survey Says.” A survey?

According to the capitalist tool:

A new survey of data scientists found that they spend most of their time massaging rather than mining or modeling data.

The point is that few wizards want to come to grips with the problem of figuring out what’s wrong with data in a set or a stream and then getting the data into a form that can be used with reasonable confidence.

Those exception folders, annoying, aren’t they?

The write up points out that a data scientist spends 80 percent of his or her time doing housecleaning. Skip the job and the house becomes unpleasant indeed.

The survey also reveals that data scientists have to organize the data to be analyzed. Imagine that. The baloney about automatically sucking in a wide range of data does not match the reality of the survey sample.
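
The “organize the data” step is mundane to show. Here is a minimal pandas sketch of the usual housecleaning chores; the file and column names are hypothetical.

import pandas as pd

# Hypothetical raw extract; file and column names are made up for illustration.
df = pd.read_csv("transactions_raw.csv")

# The unglamorous 80 percent: deduplicate, normalize, coerce types, drop gaps.
df = df.drop_duplicates()
df["customer_id"] = df["customer_id"].astype(str).str.strip().str.upper()
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")      # bad values become NaN
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
df = df.dropna(subset=["customer_id", "amount", "order_date"])

# Only after the cleanup does the modeling start.
print(df.describe())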

Another grim bit of drudgery emerges from the sample, which we assume was gathered with the appropriate textbook procedures: the skills most in demand were for SQL. Yep, old school.

Consider that most of the companies marketing next generation data mining and analytics systems never discuss grunt work and old fashioned data management.

Why the disconnect?

My hunch is that it is the sizzle, not the steak, which sells. Little wonder that some analytics outputs might be lab-made hamburger.

Stephen E Arnold, April 1, 2016
