Google Ads: Helping Users and Developers. Oh, and Maybe Google Too?

May 27, 2021

How is Google changing? Ads everywhere. “Google Will Soon Allow Developers to Advertise Their Android Apps on the Desktop Search” reports on a problem and a very interesting solution:

App developers can face a hard time trying to advertise their apps on the Google Play store and get people to download them; for new developers, promoting their app can be a hard battle if they don’t have the right budget and tools for it.
Keeping this problem in mind, Google was quick to come up with creative and really well thought out ideas and tools that will make it easier for developers to advertise their apps across the Google ecosystem.

Love that Chrome and its variants, don’t you? Here’s how the new ad-centric revenue maker works:

The Ad campaign feature uses machine learning and artificial intelligence to evaluate and improve advertisement campaigns. Google’s machine learning algorithm learns user behavior, location, and previous searches, which helps target the right audience for the advertisements. Now, for the very first time, Google will be releasing this feature on the desktop version of the Google browser.

Web search continues to get better and better at providing Google with clever ways to generate revenue. Do developers have a choice? Sure, there’s the friendly Apple app store. You may not know much about it. Heck, Tim Apple doesn’t know much about how the business works either.

Google? Much simpler. Everything may become an ad. How about relevance? How about bias in smart software? How about that free search system and its super duper results?

Stephen E Arnold, May 27, 2021

Amazon: Fake Reviews Prompt Amazon to Explain Real Reviews

May 27, 2021

Fake reviews are a problem. Need some? Give Fiverr.com a try. In the meantime, Amazon is responding to what is a disinformation challenge. How? The “real” review method is described in “The Secrets of Amazon Reviews: Feedback, Fakes, and the Unwritten Rules of Online Commerce.”

The article quotes a former Amazon reseller as saying:

One of my complaints about Amazon is their inconsistency in enforcing their own terms of service.

Obviously the former reseller does not agree with Malcolm Gladwell (the 10,000 hour expert) who allegedly said:

Consistency is the most overrated of all human virtues… I’m someone who changes his mind all the time.

Amazon’s response presented in the write up is that Amazon is:

“relentless” in its efforts to police customer reviews, with “long-standing policies to protect the integrity of our store, including product authenticity, genuine reviews, and products meeting the expectations of our customers.” “To do this, we use powerful machine learning tools and skilled investigators to analyze over 10 million review submissions weekly, aiming to stop abusive reviews before they are ever published,” an Amazon spokesperson said via email.  Amazon said it takes “swift action” against violators, including suspending or removing selling privileges: “We take this responsibility seriously, monitor our decision accuracy and maintain a high bar.”
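Amazon does not disclose how its machine learning screening actually works. A minimal sketch of the kind of heuristic first pass such a review pipeline might start with, where every field name and threshold is invented for illustration:

```python
from collections import Counter

def flag_suspicious_reviews(reviews):
    """Flag reviews that match simple abuse heuristics.

    Each review is a dict with 'text', 'rating', and 'account_age_days'.
    The rules and thresholds here are illustrative, not Amazon's.
    """
    # Reviews that share identical text are a classic fake-review signal.
    text_counts = Counter(r["text"] for r in reviews)
    flagged = []
    for r in reviews:
        reasons = []
        if text_counts[r["text"]] > 1:
            reasons.append("duplicate text")
        # A brand-new account leaving a perfect score is another weak signal.
        if r["rating"] == 5 and r["account_age_days"] < 7:
            reasons.append("new account, top rating")
        if reasons:
            flagged.append((r, reasons))
    return flagged
```

A real system screening 10 million submissions a week presumably layers far richer signals (reviewer graphs, purchase verification, text models) on top of simple rules like these.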

Amazon’s policy is clear:

Customer Reviews should give customers genuine product feedback from fellow shoppers. We have a zero tolerance policy for any review designed to mislead or manipulate customers.

The write up includes a list of no-nos for reviewers; for example, friends should not review rechargeable products that can catch fire while omitting that minor point.

Check out the policies for resellers operating via Amazon in the US. You can find that missive here.

Like Google and Microsoft, Amazon wants to do better.

Stephen E Arnold, May 27, 2021

Recorded Future: Poking Googzilla?

May 26, 2021

Google and In-Q-Tel were among the first to embrace the start up Recorded Future. Over the years, Recorded Future beavered away in specialist markets. There were some important successes; for example, helpful insights about the Paris terrorist attacks. But Recorded Future was not a headline grabber. Predictive analytics is not the sort of thing that inflames the real journalists at many “real news” publications. The Googley part of Recorded Future faded over time, and it seems to me that most of the analysts forgot it was around in the first place. Then came the sale of Recorded Future to Insight Partners for about $800 million. From start up to exit in 12 years and another home run for the founders. Now the work begins. The company has to generate more revenue, which has been a challenge for similar companies.

Recorded Future does do search, but it does not do online advertising as a revenue generator. The company has a broad array of services, and it is finding that established competitors like IBM i2, Palantir Technologies, and Verint are also chasing available projects for specialized software. To add a twist to the story, start ups like Trendalyze (an outfit focused on real time analytics) and DataWalk (a better Palantir in my opinion) are snagging work in some rarified niches.

What’s the non-Googley Recorded Future doing?

After reading “Thousands of Chrome Extensions Are Tampering with Security Headers,” I think the Insight-owned outfit is poking a stick into the zoological park in which Googzilla hunts. My hunch is that Google continues taking off-the-radar actions to ensure that its revenues flow and glow. (No, that’s not on any Google T shirt I possess.) The new Recorded Future is revealing a Google method, and I think some in the Googleplex will not be happy.

The write up does not get into Google’s business strategy. But someone will read the Recorded Future post and do a bit of digging.

Several thoughts:

  1. Has Recorded Future broken an unwritten rule regarding the explanation of Google’s more interesting methods?
  2. Will the Google respond in a way that tweaks the nose of the Recorded Future team?
  3. Will Recorded Future escalate its revelations about the GOOG to get clicks, generate traffic, and possibly make sales?

I have no answers. I think the write up is interesting and probably long overdue. I think this is an important shift which has taken place with a new owner overseeing the once Googley predictive analytics company. Insight probably used the Recorded Future methods to predict the probabilities for upsides and downsides of this type of article. There are margins of error, however.

Stephen E Arnold, May 26, 2021

Google DeepMind: Two High School Science Clubs Arm Wrestle

May 26, 2021

Not Fortnite vs Apple, not Spartans versus some people from the east, and definitely not Wladimir Klitschko fighting Deontay Wilder. Nope, this dust up is Google Mountain View (the unit uses an icon of Jeff Dean as its identifier) against Google DeepMind (this science club uses an icon of a humiliated human Go master as its shibboleth).

Mountain View Icon

Deep Mind Icon


The Murdoch real news outfit published “Google AI Unit Fails to Gain More Autonomy.” You can chase down the dead tree edition for May 22-23, 2021 or cough up some cash and read the report at this link. I noted this passage from the write up:

Senior managers at Google artificial-intelligence unit DeepMind have been negotiating for years with the parent company for more autonomy, seeking an independent legal structure for the sensitive research they do…Google called off those talks…The end of the long-running negotiations, which hasn’t previously been reported, is the latest example of how Google and other tech giants are trying to strengthen their control over the study and advancement of artificial intelligence.

The estimable Murdoch real news outfit notes:

Google bought the London-based startup for about $500 million. DeepMind has about 1,000 staff members, most of them researchers and engineers. In 2019, DeepMind’s pretax losses widened to £477 million, equivalent to about $660 million, according to the latest documents filed with the U.K.’s Companies House registry.

What are the stakes for the high school science club teams? A trophy or a demonstration of how bright people engaged in AI (whatever that means) manifest their software vision?

Several observations:

  1. Money losing gives the Mountain View team an advantage
  2. “Winning” in the mercurial field of smart software depends on the data fed into the algorithms. Humans – particularly science club members – can be somewhat subjective, unpredictable, and – dare I say the word – illogical
  3. The DeepMind science club team appears to value what might be called non-commercial thoughts about smart software. (Smart software, it seems, can be trained like a pigeon to perform in interesting ways, at least according to my psychology textbook, which I studied a half century ago. Yep, pigeons. A powerful metaphor too.)

This David versus Goliath fight is a facet of the fantastic management acumen demonstrated in the Mountain View handling of ethical AI staff. (Google’s power may have reached the US TV show which reported about AI “issues” without mentioning the standard bearer of algorithmic bias. Does the name Dr. Timnit Gebru sound familiar? It apparently did not to the “60 Minutes” producer.)

Net net: Both science club teams are likely to be losers. The victor may be dissenting staff who quit and write about the Google’s scintillating management methods. I expect some start ups to emerge from the staff departures. Venture funds like opportunities. I do like the icons for each team. Are there coffee mugs and T shirts available?

This intra AI tussle may not amount to anything, right?

Stephen E Arnold, May 26, 2021

Microsoft GitHub Embraces Dev Video

May 26, 2021

How easy will it be for frisky developers and programmers to surf on Microsoft GitHub’s new video feature? My hunch is that it will be pretty easy. The news of this Amazon and YouTube type innovation appears in “Video Uploads Now Available across GitHub.”

The write up states:

At GitHub, we’ve utilized video to more concisely detail complex workflows, show our teammates where we’re blocked, and inspire our colleagues with the next big idea. Today, we’re announcing that the ability to upload video is generally available for everyone across GitHub. Now you can upload .mp4 and .mov files in issues, pull requests, discussions, and more.

A number of video sites present fascinating technical information. Some of those videos include helpful pointers to even more interesting content. Here’s an example of a screenshot I made from a YouTube video:

image

The video’s title is “How to Get Sony Vegas Pro 18 for Free *2021* Permanent Activation Pack.” Other services offer similar technical workflow videos.

GitHub is a go-to resource for a wide range of content, including penetration testing software similar to that used by some bad actors.

But video is hot, and Microsoft is going for it.

Stephen E Arnold, May 26, 2021

Data Silos vs. Knowledge Graphs

May 26, 2021

Data scientist and blogger Dan McCreary has high hopes for his field’s future, describing what he sees as the upcoming shift “From Data Science to Knowledge Science.” He predicts:

“I believe that within five years there will be dramatic growth in a new field called Knowledge Science. Knowledge scientists will be ten times more productive than today’s data scientists because they will be able to make a new set of assumptions about the inputs to their models and they will be able to quickly store their insights in a knowledge graph for others to use. Knowledge scientists will be able to assume their input features:

  1. Have higher quality
  2. Are harmonized for consistency
  3. Are normalized to be within well-defined ranges
  4. Remain highly connected to other relevant data such as provenance and lineage metadata”

Why will this evolution occur? Because professionals are motivated to develop their way past the current tedious state of affairs—we are told data scientists typically spend 50% to 80% of their time on data clean-up. This leaves little time to explore the nuggets of knowledge they eventually find among the weeds.

As McCreary sees it, however, the keys to a solution already exist. For example, machine learning can be used to feed high-quality, normalized data into accessible and evolving knowledge graphs. He describes how MarkLogic, where he used to work, developed and uses data quality scores. Such scores would be key to building knowledge graphs that analysts can trust. See the post for more details on how today’s tedious data science might evolve into this more efficient “knowledge science.” We hope his predictions are correct, but only time will tell. About five years, apparently.
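McCreary describes MarkLogic-style data quality scores only at a high level. A toy sketch of scoring a record against the four properties in the list above, where the checks and the equal weighting are my own invention, not MarkLogic’s method:

```python
def quality_score(record, expected_ranges,
                  required_provenance=("source", "ingested_at")):
    """Score a record 0.0-1.0 on completeness, range validity, and provenance.

    A toy data-quality score; the specific checks are illustrative only.
    """
    checks = []
    # Quality/completeness: no missing values.
    checks.append(all(v is not None for v in record.values()))
    # Normalization: each numeric field falls within a well-defined range.
    for field, (lo, hi) in expected_ranges.items():
        checks.append(field in record and lo <= record[field] <= hi)
    # Connectedness: provenance and lineage metadata are attached.
    checks.append(all(k in record for k in required_provenance))
    return sum(checks) / len(checks)
```

A knowledge graph pipeline could then admit only records whose score clears some threshold, which is what would let downstream “knowledge scientists” skip the clean-up step.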

Cynthia Murrell, May 26, 2021

The Country Russia and the Company Google: Fair Fight?

May 25, 2021

Sergey Brin’s flight to space did not blast off. Now it seems that Google’s business is mired in a mere nation state’s regulatory bureaucracy. What’s galactic Google to do when a country refuses to be Googley? “Russia Orders Google to Delete Illegal Content or Face Slowdowns” states that Russia’s:

Roskomnadzor internet commission gave the company 24 hours to delete more than 26,000 instances of what it’s classifying as illegal content. If Google doesn’t comply with the order, it could face fines valued at up to 10 percent of its annual revenue, in addition to seeing its services slowed down within the country. The agency has also accused Google of censoring Russian media outlets, including state-owned entities like RT and Sputnik.

Google played a mean game of Boogalah in Australia. I am not sure which combatant triumphed. The upcoming contest with the Bear may be more challenging than tossing around a ball covered in kangaroo skin. Hockey and vodka drinking are among the more popular sports in Yakutsk, I have heard.

Will Sundar Pichai travel to Russia and perhaps bond with Mr. Putin when he goes camping or horseback riding? I can visualize the two sharing a campfire or enjoying a ride about 150 miles northeast of Moscow.

The article explains that Russia has been less than thrilled with some US high technology companies. Furthermore, the country’s government remains squarely focused on earth and has not been willing to kneel before outfits which are galactic.

Getting into a dust up with Russia might be a reason to hire someone to check food deliveries to the Googleplex.

Stephen E Arnold, May 25, 2021

Another Way to Inject Ads into Semi-Relevant Content?

May 25, 2021

It looks like better search is just around the corner. Again. MIT Technology Review proclaims, “Language Models Like GPT-3 Could Herald a New Type of Search Engine.” Google’s PageRank has reigned over online search for over two decades. Even today’s AI search tech works as a complement to that system, used to rank results or better interpret queries. Now Googley researchers suggest a way to replace the ranking system altogether with an AI language model. This new technology would serve up direct answers to user queries instead of supplying a list of sources. Writer Will Douglas Heaven explains:

“The problem is that even the best search engines today still respond with a list of documents that include the information asked for, not with the information itself. Search engines are also not good at responding to queries that require answers drawn from multiple sources. It’s as if you asked your doctor for advice and received a list of articles to read instead of a straight answer. Metzler and his colleagues are interested in a search engine that behaves like a human expert. It should produce answers in natural language, synthesized from more than one document, and back up its answers with references to supporting evidence, as Wikipedia articles aim to do. Large language models get us part of the way there. Trained on most of the web and hundreds of books, GPT-3 draws information from multiple sources to answer questions in natural language. The problem is that it does not keep track of those sources and cannot provide evidence for its answers. There’s no way to tell if GPT-3 is parroting trustworthy information or disinformation—or simply spewing nonsense of its own making.”

The next step, then, is to train the AI to keep track of its sources when it formulates answers. We are told no models are yet able to do this, but it should be possible to develop that capability. The researchers also note the thorny problem of AI bias will have to be addressed for this approach to be viable. Furthermore, as search expert Ziqi Zhang at the University of Sheffield points out, technical and specialist topics often stump language models because there is far less relevant text on which to train them. His example—there is much more data online about e-commerce than quantum mechanics.

Then there are the physical limitations. Natural-language researcher Hanna Hajishirzi at the University of Washington warns the shift to such large language models would gobble up vast amounts of memory and computational resources. For this reason, she believes a language model will not be able to supplant indexing. Which researchers are correct? We will find out eventually. That is OK; we are used to getting ever less relevant search results.
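The missing piece the write up describes, answers that carry their supporting sources, can be illustrated with a toy retrieve-and-cite step. Here a crude word-overlap ranking stands in for a real retriever; nothing in this sketch comes from the Google researchers’ proposal:

```python
def answer_with_sources(query, documents, top_k=2):
    """Rank documents by word overlap with the query and return the best
    passages along with their source ids, so an answer can cite them.

    A toy retrieve-and-cite step, not a real language-model search engine.
    """
    q_terms = set(query.lower().split())
    scored = []
    for doc_id, text in documents.items():
        overlap = len(q_terms & set(text.lower().split()))
        scored.append((overlap, doc_id, text))
    scored.sort(reverse=True)
    # Keep only passages with some overlap, so every answer has evidence.
    top = [(doc_id, text) for overlap, doc_id, text in scored[:top_k]
           if overlap > 0]
    return {"evidence": top, "sources": [doc_id for doc_id, _ in top]}
```

The hard research problem is the step this sketch skips: getting a generative model to synthesize natural-language answers from the evidence while keeping the source list attached and honest.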

Cynthia Murrell, May 25, 2021

Can Bias Be Eliminated from Medical AI?

May 25, 2021

It seems like a nearly insurmountable problem. Science Magazine reports, “Researchers Call for Bias-Free Artificial Intelligence.” Humans are biased. Humans build AI. It seems like bias and AI are joined. Nevertheless, the stakes in healthcare are high enough that we must try, insist two Stanford University faculty members in a paper recently published in the journal EBioMedicine. We learn:

“Clinicians and surgeons are increasingly using medical devices based on artificial intelligence. These AI devices, which rely on data-driven algorithms to inform health care decisions, presently aid in diagnosing cancers, heart conditions and diseases of the eye, with many more applications on the way. Given this surge in AI, two Stanford University faculty members are calling for efforts to ensure that this technology does not exacerbate existing health care disparities. In a new perspective paper, Stanford faculty discuss sex, gender and race bias in medicine and how these biases could be perpetuated by AI devices. The authors suggest several short- and long-term approaches to prevent AI-related bias, such as changing policies at medical funding agencies and scientific publications to ensure the data collected for studies are diverse, and incorporating more social, cultural and ethical awareness into university curricula. ‘The white body and the male body have long been the norm in medicine guiding drug discovery, treatment and standards of care, so it’s important that we do not let AI devices fall into that historical pattern.’”

The Science Magazine write-up discusses ways AI is being used in medicine today and how failure to account for race, sex, and socioeconomic status can have disastrous results. Its example is pulse oximeters. Melanin can interfere with their ability to read light passing through skin; they also misstate women’s oxygen levels more often than men’s. As a result, Black patients and women, and especially Black women, often do not get oxygen when they need it in the hospital.

The article summarizes the paper’s recommendations. One example is to require funding recipients at agencies like the National Institutes of Health to include sex and race as biological variables in their research. Another suggestion is for biomedical publications to set policies that require sex and gender analyses where appropriate. Then there is the idea that would inform medical professionals before they even enter the field—that medical schools include the ways AI can reinforce social inequities in the curriculum. These are all viable options, but will they be enough?

Cynthia Murrell, May 25, 2021

Whistleblower Discusses Fake Account Infestation at Facebook

May 25, 2021

While working at Facebook, Sophie Zhang followed her conscience where her managers and peers failed to go. Naturally, this initiative eventually got her fired. AP News shares the data scientist’s perspective in, “Insider Q&A: Sophie Zhang, Facebook Whistleblower.” Reporter Barbara Ortutay introduces the interview:

“Sophie Zhang worked as a Facebook data scientist for nearly three years before she was fired in the fall of 2020. On her final day, she posted a 7,800-word memo to the company’s internal forum. … In the memo, first published by Buzzfeed, she outlined evidence that governments in countries like Azerbaijan and Honduras were using fake accounts to influence the public. Elsewhere, such as India and Ecuador, Zhang found coordinated activity intended to manipulate public opinion, although it wasn’t clear who was behind it. Facebook, she said, didn’t take her findings seriously. Zhang’s experience led her to a stark conclusion: ‘I have blood on my hands.’ Facebook has not disputed the facts of Zhang’s story but has sought to diminish the importance of her findings.”

If you have not yet seen excerpts from the eye-opening memo or read the full story, we suggest checking out the Buzzfeed and/or Guardian links Ortutay supplies above. In the AP interview, Zhang adds some details. For example, she was apparently fired because the work she did to protect citizens around the world was interfering with her official, low-level duties. She blamed herself, however, for not doing more because she was the only one seeking out and taking down these fake accounts. No one around her seemed to give a hoot unless an outside agency contacted Facebook about a specific page. She states:

“I talked about it internally … but people couldn’t agree on whose job it was to deal with it. I was trying desperately to find anyone who cared. I talked with my manager and their manager. I talked to the threat intelligence team. I talked with many integrity teams. It took almost a year for anything to happen.”

It was actually remarkable how many fake accounts Zhang was able to eliminate on her own, but one employee could only do so much, especially without the support of higher-ups. Though Facebook pays lip service to the issue, Zhang insists they would be doing more if they were really prioritizing the problem. We note this exchange:

“Q: Facebook says it’s taking down many inauthentic accounts and has sought to dismiss your story.

A: So this is a very typical Facebook response, by which I mean that they are not actually answering the question. Suppose your spouse asks you, ‘Did you clean up the dishes yesterday?’ And you respond by saying, ‘I always prioritize cleaning the dishes. I make sure to clean the dishes. I do not want there to be dirty dishes.’ It’s an answer that may make sense, but it does not actually answer that question.”

Indeed.

Cynthia Murrell, May 25, 2021
