Google and Search Trust: Math Is Objective, Right?

November 11, 2017

I trust Google. No, I really trust Google. The reason is that I have a reasonable grasp of the mechanism for displaying search results. I have also developed some workable behaviors for when I cannot locate PowerPoint files, PDF files, or current information from pastesites. I try to look quickly at ads on a page and then discard hits which point to those “relevant” inclusions. I even avoid Google’s free services because — despite some Xoogler protests — these can and do disappear without warning.

Trust, however, seems to mean different things to different people. Consider the write up “It’s Time to Stop Trusting Google Search Already.” The write up suggests that people usually trust Google. The main point is that those people should not. I like the “already” too. Very hip. Almost breezy, gentle reader.

I noted this passage:

Alongside pushing Google to stop “fake news,” we should be looking for ways to limit trust in, and reliance on, search algorithms themselves. That might mean seeking handpicked video playlists instead of searching YouTube Kids, which recently drew criticism for surfacing inappropriate videos.

I find the notion of trusting algorithms interesting. Perhaps the issue is not “algorithms” but:

  1. Threshold values which determine what’s in and what’s out
  2. Data quality
  3. Administrative controls which permit “overrides” by really bright sales “engineers”
  4. The sequence in the workflow for implementing particular algorithms or methods
  5. Inputs from other Google systems which function in a manner similar to human user clicks
  6. Quarterly financial objectives.
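The first item on the list is easy to illustrate. A toy sketch (hypothetical scores and threshold values, nothing from Google’s actual plumbing) shows how the same scored results yield different “objective” answers depending on a single number someone chose:

```python
def filter_hits(scored_hits, threshold):
    """Keep only hits whose relevance score meets the threshold."""
    return [url for url, score in scored_hits if score >= threshold]

# Hypothetical relevance scores for four sites
hits = [("a.com", 0.91), ("b.com", 0.74), ("c.com", 0.58), ("d.com", 0.31)]

strict = filter_hits(hits, 0.75)   # only a.com makes the cut
lenient = filter_hits(hits, 0.50)  # a.com, b.com, and c.com survive
```

Same data, same algorithm, different threshold, different reality for the user.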

Trust is good; knowledge of systems and methods, engineer bias, sequence influence, and similar topics might be more fruitful than this fatalistic viewpoint:

But when something like search screws up, we can’t just tell Google to offer the right answers. We have to operate on the assumption that it won’t ever have them.

By the way, was Google’s search system and method “objective” when it integrated the GoTo, Overture, Yahoo pay to play methods which culminated in the hefty payment to the Yahooligans in 2004? Was Google ever more than “Clever”?

Stephen E Arnold, November 11, 2017

Facebook Image Hashing

November 8, 2017

This is a short post. I read “Revenge Porn: Facebook Teaming Up with Government to Stop Nude Photos Ending Up on Messenger, Instagram.” The method referenced in the write up involves “hashing.” Without getting into the weeds, the approach reminded me of the system and method developed by Terbium Labs for its Matchlight innovation. If you are curious about these techniques, you might want to take a quick look at the Terbium Web site. Based on the write up, it is not clear if the Facebook approach was developed by that company or if a third party was involved. Worth watching how this Facebook attempt to deal with some of its interesting content issues evolves.
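Neither Facebook nor Terbium Labs publishes its hashing recipe, so the following is only a generic illustration of the idea: reduce an image to a compact fingerprint, then compare fingerprints instead of pixels. A simple “average hash” on a tiny grayscale grid captures the gist:

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set when the pixel is
    brighter than the image's mean brightness."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming(h1, h2):
    """Number of bits on which two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

original = [10, 200, 30, 220, 15, 210, 25, 205, 12]   # 3x3 grayscale image
reencoded = [12, 198, 33, 219, 14, 212, 27, 203, 10]  # same image, re-saved

distance = hamming(average_hash(original), average_hash(reencoded))  # 0
```

The re-encoded copy has slightly different pixel values, yet the hashes match, which is the property a matching service needs.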

Stephen E Arnold, November 8, 2017

Great Moments in Image Recognition: Rifle or Turtle?

November 7, 2017

I read “AI Image Recognition Fooled by Single Pixel Change.” The write up explains:

In their research, Su Jiawei and colleagues at Kyushu University made tiny changes to lots of pictures that were then analyzed by widely used AI-based image recognition systems…The researchers found that changing one pixel in about 74% of the test images made the neural nets wrongly label what they saw. Some errors were near misses, such as a cat being mistaken for a dog, but others, including labeling a stealth bomber a dog, were far wider of the mark.

Let’s assume that these experts are correct. My thought is that neural networks may need a bit of tweaking.
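The researchers attacked real neural networks; a toy linear classifier (entirely made up, no relation to their method) shows why a single pixel can flip a decision when one learned weight is disproportionately large:

```python
def classify(pixels, weights, bias):
    """Toy linear classifier: 'cat' when the weighted sum is positive."""
    score = sum(p * w for p, w in zip(pixels, weights)) + bias
    return "cat" if score > 0 else "dog"

# One weight is disproportionately large -- a quirk a real network can learn
weights = [0.1, 0.1, -5.0, 0.1]
bias = 0.5

image = [1.0, 1.0, 0.0, 1.0]      # score 0.8  -> "cat"
perturbed = [1.0, 1.0, 0.2, 1.0]  # score -0.2 -> "dog"; one pixel nudged
```

Change one value on which the model leans heavily, and the label flips. Real networks are nonlinear, but the sensitivity is the same in spirit.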

What about facial recognition? I don’t want to elicit the ire of Xooglers, Apple iPhone X users, or the savvy folks at universities honking the neural network horns. Absolutely not. My goodness. What if I at age 74 wanted to apply via LinkedIn and its smart software for a 9 to 5 job sweeping floors?

Years ago I prepared a series of lectures pointing out how widely used algorithms were vulnerable to directed flows of shaped data. Exciting stuff.

The write up explains that the mavens are baffled:

There is certainly something strange and interesting going on here, we just don’t know exactly what it is yet.

May I suggest that the assumption that these methods work as sci-fi and tech cheerleaders say they do is incorrect?

Stephen E Arnold, November 7, 2017

Queries Change Ranking Factors

October 26, 2017

Did you ever wonder how Google determines which Web pages to send to the top of search results?  According to the Search Engine Journal, how Google decides on page rankings depends on the query. See more in the article: “Google: Top Ranking Factors Change Depending On Query.”  The article contains screenshots of a Twitter conversation between people at Google as they discuss search rankings.

Gary Illyes explains that there are not three ranking factors that apply to all search results.  John Mueller joined the conversation and said that Google’s algorithm’s job is to display the relevant content, but other factors vary.  Mueller also adds that trying to optimize content for ranking factors is simply short-term thinking.  Illyes mentioned that links (backlinking, presumably) are not much of a factor either.

In summary:

That’s why it’s important for Google’s algorithms to be able to adjust and recalculate for different ranking signals.

Ranking content based on the same 3 ranking signals at all times would result in Google not always delivering the most ‘relevant’ content to users.

As John Mueller says, at the end of the day that’s what Google search is trying to accomplish.

There is not a magic formula to appear at the top of Google search results.  Content is still key, as are paid results.
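Google does not disclose its signals or weights, but the idea the Googlers describe — different signal weights for different query types — can be sketched with invented numbers:

```python
# Hypothetical signals and weights; Google's real factors are not public
SIGNAL_WEIGHTS = {
    "news":      {"freshness": 0.7, "links": 0.1, "content": 0.2},
    "reference": {"freshness": 0.1, "links": 0.4, "content": 0.5},
}

def rank(pages, query_type):
    """Order pages by a weighted score, weights chosen per query type."""
    weights = SIGNAL_WEIGHTS[query_type]
    def score(page):
        return sum(weights[s] * page[s] for s in weights)
    return sorted(pages, key=score, reverse=True)

pages = [
    {"url": "old-classic.com", "freshness": 0.1, "links": 0.9, "content": 0.9},
    {"url": "breaking.com",    "freshness": 0.9, "links": 0.2, "content": 0.5},
]
```

For a “news” query the fresh page wins; for a “reference” query the well-linked classic wins. Same pages, same signals, different winners — which is why a fixed three-factor recipe cannot exist.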

Whitney Grace, October 26, 2017

Wave of Fake News Is Proving a Boon for the Need for Humans in Tech

October 20, 2017

We are often the first to praise the ingenious algorithms and tools that utilize big data and search muscle for good. But we are also one of the first to admit when things need to be scaled back a bit. The current news climate makes a perfect argument for that, as we discovered in a fascinating Yahoo! Finance piece, “Fake News is Still Here, Despite Efforts by Google and Facebook.”

The article lays out the ways that search giants like Google and social media outlets like Facebook have failed to stop the flood of fake news. Despite the world’s sharpest algorithms and computer programs, they cannot seem to curb the onslaught of odd news.

The article wisely points out that it is not a computer problem anymore, but, instead, a human one. The solution is proving to be deceptively simple: human interaction.

Facebook said last week that it would hire an extra 1,000 people to help vet ads after it found a Russian agency bought ads meant to influence last year’s election. It’s also subjecting potentially sensitive ads, including political messages, to ‘human review.’

In July, Google revamped guidelines for human workers who help rate search results in order to limit misleading and offensive material. Earlier this year, Google also allowed users to flag so-called ‘featured snippets’ and ‘autocomplete’ suggestions if they found the content harmful.

Bravo, we say. There is a limit to what high powered search and big data can do. Sometimes it feels as if those horizons are limitless, but there is still a home for humans and that is a good thing. A balance of big data and beating human hearts seems like the best way to solve the fake news problem and perhaps many others out there.

Patrick Roland, October 20, 2017

Big Data Might Just Help You See Through Walls

October 18, 2017

It might sound like science fiction or, worse, like a waste of time, but scientists are developing cameras that can see around corners. More importantly, these visual aids will fill in our human blind spots. According to an article in MIT News, “An Algorithm For Your Blind Spot,” it may have a lot of uses, but needs some serious help from big data and search.

According to the piece about the algorithm, “CornerCameras,”

CornerCameras generates one-dimensional images of the hidden scene. A single image isn’t particularly useful since it contains a fair amount of “noisy” data. But by observing the scene over several seconds and stitching together dozens of distinct images, the system can distinguish distinct objects in motion and determine their speed and trajectory.

Seems like a pretty neat tool, especially when you consider that this algorithm could help firefighters find people in burning buildings or help bus drivers spot a child running onto the street. However, it is far from perfect.

The system still has some limitations. For obvious reasons, it doesn’t work if there’s no light in the scene, and can have issues if there’s low light in the hidden scene itself. It also can get tripped up if light conditions change, like if the scene is outdoors and clouds are constantly moving across the sun. With smartphone-quality cameras the signal also gets weaker as you get farther away from the corner.
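The stitching step described in the quote rests on a simple statistical fact: average enough noisy observations and the noise cancels while the signal remains. A toy sketch (synthetic data, no relation to the actual CornerCameras pipeline):

```python
import random

random.seed(0)  # deterministic for the example
true_scene = [0.0, 0.0, 1.0, 0.0]  # a hidden object at position 2

def observe(scene, noise=0.5):
    """One noisy one-dimensional glimpse of the hidden scene."""
    return [v + random.uniform(-noise, noise) for v in scene]

def stitch(observations):
    """Average many glimpses; the noise cancels, the object remains."""
    n = len(observations)
    return [sum(col) / n for col in zip(*observations)]

recovered = stitch([observe(true_scene) for _ in range(500)])
brightest = recovered.index(max(recovered))  # position 2
```

One glimpse is dominated by noise; five hundred averaged glimpses put the object right back where it belongs.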

Seems like they have a brilliant idea in need of a big data boost. We can envision a world where these folks partner with big data and search giants to help fill in the gaps of the algorithm and provide a powerful tool that can save lives. Here’s to hoping we’re not the only ones making that connection.

Patrick Roland, October 18, 2017

CEOs Hyped on AI but Not Many Deploy It

October 17, 2017

How long ago was big data the popular buzzword?  It was not that long ago, but now it has been replaced with artificial intelligence and machine learning.  Whenever a buzzword is popular, CEOs and other leaders become obsessed with implementing it within their own organizations.  Fortune opens up about the truth of artificial intelligence and its real deployment in the editorial, “The Hype Gap In AI.”

Organization leaders have high expectations for artificial intelligence, but the reality falls well below them.  According to a survey cited in the editorial, 85% of executives believe that AI will change their organizations for the better, but only one in five has actually implemented AI in any part of the organization.  Only 39% actually have an AI strategy.

Hype about AI and its potential is all over the business sector, but very few really understand the current capabilities.  Even fewer know how they can actually use it:

But actual adoption of AI remains at a very early stage. The study finds only about 19% of companies both understand and have adopted AI; the rest are in various stages of investigation, experimentation, and watchful waiting. The biggest obstacle they face? A lack of understanding —about how to adapt their data for algorithmic training, about how to alter their business models to take advantage of AI, and about how to train their workforces for use of AI.

Organizations view AI as an end-all solution, similar to how big data was the end-all solution a few years ago.  What is even worse is that while big data may have had its difficulties, understanding it was simpler than understanding AI.  The way executives believe AI will transform their companies is akin to a science fiction solution that is still very much in the realm of the imagination.

Whitney Grace, October 17, 2017

Skepticism for Google Micro-Moment Marketing Push

October 13, 2017

An article at Street Fight, “The Fallacy of Google’s ‘Micro-Moment’ Positioning,” calls out Google’s “micro-moment” positioning for the gimmick that it is. Here’s the company’s definition of the term it made up: “an intent-rich moment when a person turns to a device to act on a need—to know, go, do, or buy.” In other words, any time a potential customer has a need and picks up their smartphone looking for a solution. For Street Fight’s David Mihm and Mike Blumenthal, this emphasis seems like a distraction from the failure of Google’s analytics to provide a well-rounded view of the online consumer. In fact, such oversimplification could hurt businesses that buy into the hype. In their dialogue format, they write:

David: [The term “micro-moments”] reduces all consumer buying decisions to thoughtless reflexes, which is just not reality, and drives all creative to a conversion-focused experience, which is only appropriate for specific kinds of keywords or mobile scenarios.  It’s totally IN-appropriate for display or top-of-funnel advertising. I also think it’s intended to create a bizarre sense of panic among marketers — “OMG, we have to be present at every possible instant someone might be looking at their phone!” — which doesn’t help them think strategically or make the best use of their marketing or ad spend.

Mike: I agree. If you don’t have a sound, broad strategy no micro management of micro moments will help. To some extent I wonder if Google’s use of the term reflects the limits of their analytics to yet be able to provide a more complete picture to the business?

David: Sure, Google is at least as well-positioned as Amazon or Facebook to provide closed-loop tracking of purchase behavior. But I think it reflects a longstanding cultural worldview within the company that reduces human behavior to an algorithm. “Get Notification. Buy Thing.” or “See Ad. Buy Thing.”  That may work for the “head” of transactional behavior but the long tail is far messier and harder to predict. Much as Larry Page would like us to be, humans are never going to be robots.

Companies that recognize the difference between consumers and robots have a clear edge in this area, no matter how Google tries to frame the issue. The authors compare Google’s blind spot to Amazon’s ease-of-use emphasis, noting the latter seems to better understand where customers are coming from. They also ponder the recent alliance between Google and Walmart to provide “voice-activated shopping” with a bit of skepticism. See the article for more of their reasoning.

Cynthia Murrell, October 13, 2017

Smart Software with a Swayed Back Pony

October 1, 2017

I read “Is AI Riding a One-Trick Pony?” and felt those old riding sores again. Technology Review finds the nifty new technology old: Bayesian methods date from the 18th century. The MIT write up has pegged Geoffrey Hinton, a beloved producer of artificial intelligence talent, as the flag bearer for the great man theory of smart software.

Dr. Hinton is a good subject for study. But the need to generate clicks and zip in the quasi-academic world of big time universities suggests the publication may be engaged in “practical” public relations. For example, the write up praises Dr. Hinton’s method of “back propagation.” At the same time, the MIT publication describes the method behind the neural networks popular today:

you change each of the weights in the direction that best reduces the error overall. The technique is called “backpropagation” because you are “propagating” errors back (or down) through the network, starting from the output.

This makes sense. The idea is that the method allows the real world to be subject to a numerical recipe.
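The quoted description boils down to gradient descent. Stripped to a single weight and a single training example (a sketch of the idea, not Dr. Hinton’s multi-layer formulation), the recipe looks like this:

```python
def train(w, x, target, lr=0.1, steps=100):
    """Gradient descent on squared error for a one-weight 'network'."""
    for _ in range(steps):
        output = w * x
        error = output - target
        # derivative of error**2 with respect to w is 2 * error * x:
        # the output error, propagated back to the weight
        w -= lr * 2 * error * x
    return w

w = train(w=0.0, x=1.0, target=3.0)  # w converges toward 3.0
```

In a real network the same error term is pushed back through every layer, weight by weight, which is where the “propagation” in the name comes from.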

The write up states:

Neural nets can be thought of as trying to take things—images, words, recordings of someone talking, medical data—and put them into what mathematicians call a high-dimensional vector space, where the closeness or distance of the things reflects some important feature of the actual world.

Yes, reality. The way the brain works. A way to make software smart. Indeed a one trick pony which can be outfitted with a silver bridle, a groomed mane and tail, and black liquid shoe polish on its dainty hooves.
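The “closeness reflects some important feature of the actual world” claim is easy to demo with made-up numbers. These three-dimensional “embeddings” are invented for illustration; real systems use hundreds of dimensions:

```python
import math

def cosine(u, v):
    """Cosine similarity: 1.0 for parallel vectors, near 0 for unrelated ones."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical embeddings: cat and dog land near each other in the space
cat = [0.9, 0.8, 0.1]
dog = [0.8, 0.9, 0.2]
stealth_bomber = [0.1, 0.2, 0.9]
```

In such a space the cat vector sits much nearer to the dog vector than to the stealth bomber — unless, as the one-pixel research above suggests, something nudges it.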

The sway back? A genetic weakness. A one trick pony with a sway back may not be able to carry overweight kiddies to the Artificial Intelligence Restaurant, however.

MIT’s write up suggests there is a weakness in the method; specifically:

these “deep learning” systems are still pretty dumb, in spite of how smart they sometimes seem.

Neural nets are just thoughtless fuzzy pattern recognizers, and as useful as fuzzy pattern recognizers can be—hence the rush to integrate them into just about every kind of software—they represent, at best, a limited brand of intelligence, one that is easily fooled.

The article points out that:

And though we’ve started to get a better handle on what kinds of changes will improve deep-learning systems, we’re still largely in the dark about how those systems work, or whether they could ever add up to something as powerful as the human mind.

There is hope too:

Essentially, it is a procedure he calls the “exploration–compression” algorithm. It gets a computer to function somewhat like a programmer who builds up a library of reusable, modular components on the way to building more and more complex programs. Without being told anything about a new domain, the computer tries to structure knowledge about it just by playing around, consolidating what it’s found, and playing around some more, the way a human child does.

We have a braided mane and maybe a combed tail.

But what about that swayed back, the genetic weakness which leads to a crippling injury when the poor pony is asked to haul a Facebook or Google sized child around the ring? What happens if low cost, more efficient ways to create training data, replete with accurate metadata and tags for human things like sentiment and context awareness become affordable, fast, and easy?

My thought is that it may be possible to do a bit of genetic engineering and make the next pony healthier and less expensive to maintain.

Stephen E Arnold, October 1, 2017

New Beyond Search Overflight Report: The Bitext Conversational Chatbot Service

September 25, 2017

Stephen E Arnold and the team at Arnold Information Technology analyzed Bitext’s Conversational Chatbot Service. The BCBS taps Bitext’s proprietary Deep Linguistic Analysis Platform to provide greater accuracy for chatbots regardless of platform.

Arnold said:

The BCBS augments chatbot platforms from Amazon, Facebook, Google, Microsoft, and IBM, among others. The system uses specific DLAP operations to understand conversational queries. Syntactic functions, semantic roles, and knowledge graph tags increase the accuracy of chatbot intent and slotting operations.

One unique engineering feature of the BCBS is that specific Bitext content processing functions can be activated to meet specific chatbot applications and use cases. DLAP supports more than 50 languages. A BCBS licensee can activate additional language support as needed. A chatbot may be designed to handle English language queries, but Spanish, Italian, and other languages can be activated via an instruction.

Dr. Antonio Valderrabanos said:

People want devices that understand what they say and intend. BCBS (Bitext Chatbot Service) allows smart software to take the intended action. BCBS allows a chatbot to understand context and leverage deep learning, machine intelligence, and other technologies to turbo-charge chatbot platforms.

Based on ArnoldIT’s test of the BCBS, tagging accuracy jumped as much as 70 percent. Another surprising finding was that the time required to perform content tagging decreased.

Paul Korzeniowski, a member of the ArnoldIT study team, observed:

The Bitext system handles a number of difficult content processing issues easily. Specifically, the BCBS can identify negation regardless of the structure of the user’s query. The system can understand double intent; that is, a statement which contains two or more intents. BCBS is one of the most effective content processing systems to deal correctly with variability in human statements, instructions, and queries.
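Bitext’s Deep Linguistic Analysis Platform is proprietary and far more sophisticated than anything sketched here, but a toy comparison shows why negation handling matters for intent detection. Both matchers below are invented for illustration:

```python
def naive_intent(query):
    """Keyword spotting: sees 'cancel' and fires, negated or not."""
    return "cancel_order" if "cancel" in query.lower().split() else "other"

def negation_aware_intent(query):
    """Look for a negator before the verb, as a linguistic analysis would."""
    words = query.lower().split()
    if "cancel" not in words:
        return "other"
    before = words[:words.index("cancel")]
    if any(w in {"don't", "not", "never"} for w in before):
        return "keep_order"
    return "cancel_order"

q1 = "Please cancel my order"
q2 = "Don't cancel my order"
```

The naive matcher cancels the order in both cases; the negation-aware pass catches the “don’t” and does the opposite — exactly the kind of error a customer-facing chatbot cannot afford.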

Bitext’s BCBS and DLAP solutions deliver higher accuracy, enable more reliable sentiment analyses, and even output critical actor-action-outcome content processing. Such data are invaluable for disambiguation in Web and enterprise search applications, content processing for discovery solutions used in fraud detection and law enforcement, and consumer-facing mobile applications.

Because Bitext was one of the first platform solution providers, the firm was able to identify market trends and create its unique BCBS service for major chatbot platforms. The company focuses solely on solving problems common to companies relying on machine learning and, as a result, has done a better job delivering such functionality than other firms have.

A copy of the 22 page Beyond Search Overflight analysis is available directly from Bitext at this link on the Bitext site.

Once again, Bitext has broken through the barriers that block multi-language text analysis. The company’s Deep Linguistics Analysis Platform supports more than 50 languages at a lexical level and more than 20 at a syntactic level, and it makes the company’s technology available for a wide range of applications in Big Data, Artificial Intelligence, social media analysis, text analytics, and the new wave of products designed for voice interfaces supporting multiple languages, such as chatbots. Bitext’s breakthrough technology solves many complex language problems and integrates machine learning engines with linguistic features. The Deep Linguistics Analysis Platform allows seamless integration with commercial, off-the-shelf content processing and text analytics systems. Bitext’s innovative system reduces costs for processing multilingual text for government agencies and commercial enterprises worldwide. The company has offices in Madrid, Spain, and San Francisco, California. For more information, visit

Kenny Toth, September 25, 2017
