The Future of Visual and Voice Search

October 4, 2017

From the perspective of the digital marketers they are, GeoMarketing ponders, “How Will Visual and Voice Search Evolve?” Writer David Kaplan consulted Bing Ads’ Purna Virji on what to expect going forward. For example, though companies are not yet doing much to monetize visual search, Virji says that could change as AIs continue to improve their image-recognition abilities. She also emphasizes the potential of visual search for product discovery: if, for example, someone can locate and buy a pair of shoes just by snapping a picture of a stranger’s feet, sales should benefit handsomely. Virji had this to say about traditional, voice, and image search functionalities working together:

A prediction that Andrew Ng had made when he was still with Baidu was that ‘by 2020, 50 percent of all search will be image or voice.’ Typing will likely never go away. But now, we have more options. Just like mobile didn’t kill the desktop and apps didn’t kill the browser, the mix of visual, voice, and text will combine in ways that are natural extensions of user behavior. We’ll use those tools depending on the specific need and situation at the moment. For example, you could ‘show’ Cortana a picture of a dress in a magazine via your phone camera and say ‘Hey Cortana, I’d love to buy a dress like this,’ and she can go find where to buy it online. In this way, you used voice and images to find what you were looking for.

The interview also touches on the impact of visual search on local marketing and how its growing use in social media offers data analysts a wealth of targeted-advertising potential.

Cynthia Murrell, October 4, 2017

Search and Privacy: A Quick Update

October 3, 2017

In my files, I had a copy of “Duck Duck Go: Illusion of Privacy.” This document comments on the hurdles a public Web search system must jump over in order to deliver privacy. You can find the write up at this link. If you want to test some privacy-oriented search systems, there are some DuckDuckGo.com alternatives. I am not endorsing these outfits; I am passing along some links because within the last couple of years I learned that privacy is part of the marketing for these systems:

  • Ixquick, which is now Startpage (www.startpage.com). This is a metasearch engine, which means that the user’s query is passed (in theory anonymously) to Bing, Google, Yandex, et al. A rough sketch of the metasearch idea appears below.
  • Unbubble (www.unbubble.eu). Note that this European service asserts “strong privacy.”
  • Gibiru (www.gibiru.com), which emphasizes anonymous search. Gibiru provides a link to the Firefox Anonymox plug-in, but the most recent version of Firefox has been tricky for us.

My personal view on search anonymization: when I research my books about CyberOSINT, the Dark Web, and eDiscovery for cyber intelligence, I assume that a number of individuals are thrilled with the sites we uncover, write up, and describe in our lectures and webinars. In short, I avoid trying to be “tricky” because I can explain the thousands of queries we run about many exciting topics. See www.xenky.com/darkwebnotebook for a sampler.
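As promised above, here is what the metasearch pattern boils down to: forward the user’s query to the upstream engines without the user’s identifying baggage. This is a minimal sketch using made-up endpoint URLs and parameter names, not any real engine’s interface:

```python
# Minimal metasearch sketch: pass only the query, strip per-user identifiers.
# The endpoints below are placeholders, not the real Startpage/Bing/Google interfaces.

import requests

UPSTREAM_ENGINES = {
    "engine_a": "https://example-engine-a.invalid/search",   # hypothetical endpoints
    "engine_b": "https://example-engine-b.invalid/search",
}

def anonymous_metasearch(query):
    """Forward only the query string: no cookies, no user identifiers, a generic user agent."""
    results = {}
    for name, url in UPSTREAM_ENGINES.items():
        resp = requests.get(
            url,
            params={"q": query},
            headers={"User-Agent": "generic-metasearch/1.0"},  # same UA for every user
            cookies={},                                        # no per-user state forwarded
            timeout=10,
        )
        results[name] = resp.text
    return results
```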

Stephen E Arnold, October 3, 2017

Oracle: Sparking the Database Fire

October 3, 2017

Hadoop? Er, what? And Microsoft SQL Server? Or MarkLogic’s XML, business intelligence, analytics, and search offering? Amazon’s storage complex? IBM’s DB2? The recently endowed MongoDB?

I thought of these systems when I read “Targeting Cybersecurity, Larry Ellison Debuts Oracle’s New ‘Self-Driving’ Database.”

For me, the main point of the write up is that a new Oracle database is coming. There’s nothing like an announcement to keep the Oracle faithful in the fold.

If the write up is accurate, Oracle is embracing buzzy trends, storage that eliminates the guesswork, and security. (Remember Secure Enterprise Search, the Security Server, and the nifty credential verification procedures? I do.)

The new version of Oracle, according to the write up, will deliver self-driving. Cars don’t do this too well, but the Oracle database will, and darned soon.

The 18c Autonomous Database, or 18cad, will:

  • Fix itself
  • Cost less than Amazon’s cloud
  • Go faster
  • Be online 99.995 percent of the time

And more, of course.
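For context, the 99.995 percent figure in that list works out to roughly 26 minutes of downtime a year. Here is the back-of-the-envelope arithmetic (mine, not Oracle’s):

```python
# Quick arithmetic on the quoted 99.995 percent availability claim
# (a back-of-the-envelope check, not an Oracle specification).

minutes_per_year = 365 * 24 * 60                      # 525,600 minutes
allowed_downtime = minutes_per_year * (1 - 0.99995)   # the 0.005 percent that may be offline
print(f"{allowed_downtime:.1f} minutes of downtime per year")   # roughly 26 minutes
```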

Let’s assume that Oracle 18cad works as described. (Words are usually easier to do than software, I remind myself.)

The customers look to be big winners. Better, faster, cheaper. Oracle believes its revenues will soar because happy customers just buy more Oracle goodies.

Will there be a downside?

What about database administrators? Some organizations may assume that 18cad will allow some expensive database administrator (DBA) heads to roll.

What about the competition? I anticipate more marketing fireworks or at least some open source “sparks” and competitive flames to heat up the cold autumn days.

Stephen E Arnold, October 3, 2017

Antitrust Legislation Insufficient for Information Marketplace

October 3, 2017

At his blog, Continuations, venture capitalist Albert Wenger calls for a new approach to regulating the information market in his piece, “Right Goals, Wrong Tools: EU Antitrust Case Against Google.” Citing this case against Google, he observes that existing antitrust legislation is not up to the task of regulating companies like Google. Instead, he insists, we need solutions that consider today’s realities. He writes:

We need alternative regulatory tools that are more in line with how computation works and why the properties of information tend to lead to concentration. We want networks and network effects to exist because of their positive externalities. Imagine as a counterfactual a world of highly fragmented operating systems for smartphones – it would make it extremely difficult for app developers to write apps that work well for everyone (hard enough across iOS and Android). At the same time we want to prevent networks and network effect companies from becoming so powerful and extractive that they stifle innovation. For instance, I have written before about how the app store duopoly has prevented certain kinds of innovation. Antitrust is a sledgehammer that was invented at a time of large industrial companies that had no network effects. Using it now is a bad idea and doubly so because it goes only after Google, which has by far the more open mobile operating system when compared to Apple.

Wenger suggests a solution could lie in a requirement for open standards, or in the “right to be represented by a bot.” He points to his 17-minute TED talk, embedded in the article, for more on his public policy suggestions.

Cynthia Murrell, October 3, 2017

Short Honk: Cyber Weapon Market

October 2, 2017

In November 2017, the focus of Beyond Search and HonkinNews will change. The free information services will increase their coverage of weaponized online information. A preview of the type of information we will highlight appears in “Cyber Weapon Market to Reach US$521.87 Billion by the End of 2021,” an OpenPR article which summarizes the underlying report. The news item asserts:

According to TMR, the global cyber weapon market stood at US$390 bn in 2014. Rising at a CAGR of 4.4%, the market is expected to reach US$521.87 bn by the end of 2021. With a share of 73.8%, the defensive cyber weapon segment dominated the market by type in 2014. Regionally, North America accounted for the leading share of 36% in the global market in 2014.
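For what it is worth, the quoted figures roughly hang together. A quick sanity check (my calculation, not TMR’s):

```python
# Sanity check of the quoted figures: US$390 bn in 2014 compounding at 4.4% a year.
base_2014 = 390.0          # US$ billions, from the quote
cagr = 0.044               # 4.4% compound annual growth rate
years = 2021 - 2014        # seven years of growth

projected_2021 = base_2014 * (1 + cagr) ** years
print(f"Projected 2021 market: US${projected_2021:.2f} bn")   # about US$527 bn, near the quoted US$521.87 bn
```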

If the estimate is accurate, there is money in things cyber. Watch for our new report, E Discovery for Cyber Intelligence. Previews of the report will appear in our twice-a-month video program “HonkinNews” starting in six weeks.

Stephen E Arnold, October 2, 2017

Google Supports Outraged Scholars

October 2, 2017

Google has taken issue with a recent list from the Campaign for Accountability (CfA), TechCrunch reports in, “Google Responds to Academic Funding Controversy—with a GIF.” Writer Frederic Lardinois reports that the CfA recently released a list of policy experts and academics who, they say, had received Googley dollars last year. The only problem—many who found themselves on the list dispute their inclusion, saying they had not received any funding from Google or, if they had, it was unrelated to the work the CfA specified. Google issued a response, supporting the protesting experts and academics as well as defending its support of researchers in general. The company also struck back; the article explains:

And in a direct attack on CfA, Google also notes that while the group advocates for transparency, its own corporate funders remain in the shadows. The only backer we know of is Oracle, which is obviously competing with Google in many areas. The group has also recently taken on SolarCity/Tesla. In its blog post, Google also argues that ‘AT&T, the MPAA, ICOMP, FairSearch and dozens more’ fund similar campaigns.

Google later created a GIF in response to requests for elaboration. It shares a series of tweets from some of the affected scholars, in which they detail just where the CfA went wrong in each of their cases. Lardinois continues:

It’s not often that a company like Google makes its own GIF in response to a request for comment, but I gather this goes to show that Google wants to move on from this discussion and let the academics speak for themselves. While the CfA’s methods are less than ideal, there are legitimate questions about how even small amounts of funding can influence research.

So far, Lardinois notes, public discussion of how funding can influence research has centered on pharmaceuticals. He projects it will soon grow, however, to include policy research as tech companies ramp up their funding programs.

Cynthia Murrell, October 2, 2017

Facebook: A Pioneer in Bro-giveness?

October 2, 2017

The write up “Mark Zuckerberg Asks for Forgiveness from ‘Those I Hurt This Year’ in Yom Kippur Message” surprised me. In my brief encounters with Silicon Valley “bros”, I cannot recall too many apologies or apologetic moments. My first thought was, “Short circuit somewhere.”

The Verge article explained to me:

[Mark Zuckerberg, founder of Facebook] publicly asked for forgiveness from “those I hurt this year.”

I thought online companies were like utilities. Who gets excited if a water main break drowns an elderly person’s parakeet? Who laments when a utility pole short-circuits a squirrel? Who worries if an algorithm tries to sell me an iPhone when I am an Android-type senior citizen?

I noted this statement:

Zuckerberg acknowledged that Facebook has had a divisive effect on the country, and that he’ll work to do better in the coming year.

I like New Year’s resolutions.

The write up quotes another Silicon Valley source, one I sometimes associate with enthusiasm for what’s new and “important”:

Facebook itself needs to do better to improve its efforts in combating the spread of false information and abuse that appears throughout its platform. It and other social media sites have often touted themselves as neutral platforms for all ideas and beliefs, but underestimate how these ideals can be undermined, which led to tangible impacts in the real world. Zuckerberg may be sincere in his intentions, but the company he founded needs to follow through on them.

Follow through? Okay.

I think of this commitment to do better as the Silicon Valley equivalent of the New Yorker’s breezy, “Let’s have lunch.”

Is bro-giveness a disruptive approach to forgiveness? If it is, click the Like button.

Stephen E Arnold, October 2, 2017

Smart Software with a Swayed Back Pony

October 1, 2017

I read “Is AI Riding a One-Trick Pony?” and felt those old riding sores again. Technology Review suggests the nifty “new” technology is actually old. Bayesian methods date from the 18th century. The MIT write up has pegged Geoffrey Hinton, a beloved producer of artificial intelligence talent, as the flag bearer for the great man theory of smart software.

Dr. Hinton is a good subject for study. But the need to generate clicks and zip in the quasi-academic world of big time universities may explain the “practical” public relations. For example, the write up praises Dr. Hinton’s method of “back propagation.” At the same time, the MIT publication explains how the method operates in the neural networks popular today:

you change each of the weights in the direction that best reduces the error overall. The technique is called “backpropagation” because you are “propagating” errors back (or down) through the network, starting from the output.

This makes sense. The idea is that the method allows the real world to be subject to a numerical recipe.
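To make the mechanics concrete, here is a minimal NumPy sketch of that weight-nudging loop on a toy problem. It is my own illustration of plain-vanilla backpropagation, not code from the article:

```python
# Toy backpropagation on a one-hidden-layer network (NumPy only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(64, 2))          # toy inputs
y = (X[:, 0] * X[:, 1]).reshape(-1, 1)        # toy target: product of the two inputs

W1 = rng.normal(scale=0.5, size=(2, 8))       # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))       # hidden -> output weights
lr = 0.1

for step in range(2000):
    # forward pass
    h = np.tanh(X @ W1)                       # hidden activations
    y_hat = h @ W2                            # network output
    err = y_hat - y                           # output-layer error

    # backward pass: propagate the error from the output back through the network
    grad_W2 = h.T @ err / len(X)
    err_hidden = (err @ W2.T) * (1 - h ** 2)  # chain rule through the tanh layer
    grad_W1 = X.T @ err_hidden / len(X)

    # nudge each weight in the direction that best reduces the overall error
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print("final mean squared error:", float(np.mean(err ** 2)))
```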

The write up states:

Neural nets can be thought of as trying to take things—images, words, recordings of someone talking, medical data—and put them into what mathematicians call a high-dimensional vector space, where the closeness or distance of the things reflects some important feature of the actual world.

Yes, reality. The way the brain works. A way to make software smart. Indeed a one-trick pony, which can be outfitted with a silver bridle, a groomed mane and tail, and black liquid shoe polish on its dainty hooves.
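The vector space idea in that passage is easy to illustrate. In the toy sketch below the vectors are invented for the example; a real system would learn them from data:

```python
# Items as vectors: closeness (cosine similarity) stands in for real-world similarity.
import numpy as np

embeddings = {
    "kitten": np.array([0.9, 0.8, 0.1]),   # made-up vectors for illustration
    "cat":    np.array([1.0, 0.7, 0.0]),
    "truck":  np.array([0.0, 0.1, 1.0]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["kitten"], embeddings["cat"]))    # high: similar things sit close together
print(cosine(embeddings["kitten"], embeddings["truck"]))  # low: dissimilar things sit far apart
```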

The sway back? A genetic weakness. A one-trick pony with a sway back may not be able to carry overweight kiddies to the Artificial Intelligence Restaurant, however.

MIT’s write up suggests there is a weakness in the method; specifically:

these “deep learning” systems are still pretty dumb, in spite of how smart they sometimes seem.

Why?

Neural nets are just thoughtless fuzzy pattern recognizers, and as useful as fuzzy pattern recognizers can be—hence the rush to integrate them into just about every kind of software—they represent, at best, a limited brand of intelligence, one that is easily fooled.

The article also points out that:

And though we’ve started to get a better handle on what kinds of changes will improve deep-learning systems, we’re still largely in the dark about how those systems work, or whether they could ever add up to something as powerful as the human mind.

There is hope too:

Essentially, it is a procedure he calls the “exploration–compression” algorithm. It gets a computer to function somewhat like a programmer who builds up a library of reusable, modular components on the way to building more and more complex programs. Without being told anything about a new domain, the computer tries to structure knowledge about it just by playing around, consolidating what it’s found, and playing around some more, the way a human child does.
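Out of curiosity, here is a toy sketch of what such an exploration-compression loop could look like. Everything in it (the primitives, the tasks, the pair-merging heuristic) is my own simplified guess at the shape of the procedure, not the researcher’s algorithm:

```python
# Toy "exploration-compression" loop: search for programs, then fold recurring
# fragments into a library of reusable components so later search gets easier.
from itertools import product
from collections import Counter

PRIMITIVES = {"a": "a", "b": "b", "c": "c"}   # each primitive appends one character

def run(program, library):
    """Apply a sequence of primitives / library routines, concatenating their output."""
    return "".join(library[op] for op in program)

def explore(task, library, max_len=4):
    """Brute-force the shortest program (over the current library) that produces the task."""
    for length in range(1, max_len + 1):
        for program in product(library, repeat=length):
            if run(program, library) == task:
                return program
    return None

def compress(solutions, library):
    """Promote the most common adjacent pair of operations to a reusable routine."""
    pairs = Counter()
    for prog in solutions:
        for i in range(len(prog) - 1):
            pairs[prog[i], prog[i + 1]] += 1
    if pairs:
        (x, y), count = pairs.most_common(1)[0]
        if count > 1:
            library = dict(library)
            library[f"<{x}{y}>"] = library[x] + library[y]   # new modular component
    return library

tasks = ["ab", "abc", "abab"]
library = dict(PRIMITIVES)
for _ in range(2):                      # alternate exploration and compression
    solutions = [p for t in tasks if (p := explore(t, library))]
    library = compress(solutions, library)
print(library)   # after compression the library contains a reusable "<ab>" routine
```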

We have a braided mane and maybe a combed tail.

But what about that swayed back, the genetic weakness which leads to a crippling injury when the poor pony is asked to haul a Facebook- or Google-sized child around the ring? What happens if more efficient ways to create training data, replete with accurate metadata and tags for human things like sentiment and context awareness, become affordable, fast, and easy?

My thought is that it may be possible to do a bit of genetic engineering and make the next pony healthier and less expensive to maintain.

Stephen E Arnold, October 1, 2017
