Spicing Up Possibly Biased Algorithms with Wiener Math

June 27, 2022

Let’s assume that the model described in “The Mathematics of Human Behavior: How My New Model Can Spot Liars and Counter Disinformation” is excellent. Let’s further assume that it generates “reliable” outputs which correspond to what humanoids do in real life. A final building block is to use additional predictive analytics to process the outputs of the Wiener-esque model and pipe them into an online advertising system like Apple’s, Facebook’s, Google’s, or TikTok’s.

This sounds like a useful thought experiment.

Consider this statement from the cited article:

In this new “information-based” approach, the behavior of a person – or group of people – over time is deduced by modeling the flow of information. So, for example, it is possible to ask what will happen to an election result (the likelihood of a percentage swing) if there is “fake news” of a given magnitude and frequency in circulation. But perhaps most unexpected are the deep insights we can glean into the human decision-making process. We now understand, for instance, that one of the key traits of the Bayes updating is that every alternative, whether it is the right one or not, can strongly influence the way we behave.

These statements suggest that the outputs can be used for different use cases.
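The Bayes-updating point in the quoted passage can be made concrete with a toy calculation. This is a minimal sketch, not the article's model; the prior and the likelihood values below are invented placeholders.

```python
# Toy Bayes updating: a wrong hypothesis can still dominate the posterior
# if the evidence stream keeps favoring it. All numbers here are
# illustrative assumptions, not values from the cited article.
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Return P(H | evidence) given P(H) and the two likelihoods."""
    numerator = prior * likelihood_h
    return numerator / (numerator + (1 - prior) * likelihood_not_h)

p = 0.5  # start undecided about a (false) claim
for _ in range(5):  # five exposures to "fake news" that fits the claim
    p = bayes_update(p, 0.7, 0.3)
print(round(p, 3))  # → 0.986
```

Five exposures to evidence that merely fits the wrong claim push the posterior from 0.5 to roughly 0.99, which is the sense in which a wrong alternative "can strongly influence the way we behave."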

Now, how will this new model affect online advertising, and, in a larger context, how will the model allow humanoid thoughts and actions to be shaped or weaponized? My initial ideas are:

  1. Feedback signals about content which does not advance an agenda. The idea is that a “flagged” content object is never available to an online user. Is this a more effective form of filtering? I think dynamic pre-filtering is a winner for some.
  2. Filtered content can be weaponized to advance a particular line of thought. The metaphor is that a protective mother does not allow the golden child to play outside at dusk without appropriate supervision. The golden child gleams in the gloaming and learns to avoid risky behaviors unless an appropriate guardian (maybe a Musk Optimus) is shadowing the golden child.
  3. Ads can be matched against what the Amazon, Apple, Facebook, Google, and TikTok systems have identified as appropriate. The resulting ads generated by combining the proprietary methods with those described in the write up increase the close rate by a positive amount.
  4. Use cases for law enforcement exist as well.

Exciting opportunities abound. Once again, I am glad I am old. Were he alive, Norbert Wiener might share my “glad I am old” notion when confronted with applied Wiener math.

Stephen E Arnold, June 26, 2022

NSO Group: The EU Parliament Has an Annoyed Committee

June 27, 2022

I almost made it through a week without another wild and crazy NSO Group Pegasus kerfuffle. Almost is not good enough. I read “EU Parliament’s Pegasus Committee Fires Against NSO Group.” Do committees tote kinetic weapons in Western Europe?

The write up states:

On Tuesday (21 June), the committee scrutinized the NSO Group by questioning Chaim Gelfand, the tech firm’s General Counsel and Chief Compliance Officer.  The MEP and rapporteur Sophie in ‘t Veld said the way Gelfand responded to or declined to answer several questions was “an insult to our intelligence” and that there was a “complete disconnect between reality and what you are saying”.

Does this mean “dismissive”? Maybe “arrogant”? Possibly “exasperated”?

The write up includes a question from a Polish representative; to wit:

“Who and how was checking the governments of Hungary and Poland? How on earth could they be verified by you?”

Not surprisingly, NSO Group has yet to find its equivalent of Meta’s (Zuckbook’s) spokeshuman. Perhaps NSO Group will find an individual who does not provoke EU Parliament committee members to be more forceful?

Stephen E Arnold, June 27, 2022

Some Podcast Pundits Will Not Be Outputting from China

June 27, 2022

I read “China Bans Over 30 Live-Streaming Behaviors, Demands Qualifications to Discuss Law, Finance, Medicine.” (You will have to pay to read the full text of the story. Because… capitalism.) The main point of this story is that live streaming and probably any other digital outputting will be subject to scrutiny. Topics are tricky. In general, one must have “qualifications” beyond having worked for a “real news” outfit or graduated from a university which accepts bribes for non-rowing crew members.

Other issues involve showing flashy goods and products or flouncing on a foam mattress whilst throwing cash money in the air.

The big point is that those outputting content have to have qualifications. And what are those qualifications? The article suggests that the Chinese government is not providing that type of irrelevant detail. I assume that once a violator chatters about law, medicine, money, or some similar minor subject, the full scope of the transgression will be addressed at re-education programs.

Pull off a deep fake like smart software telling Seinfeld jokes and you may get special attention. Have you ever heard about Chinese death vans? No. If you run into me at a conference, be sure to ask. I have a photo too.

Will podcasts, podcasters, streamers, and other assorted creator outputs be regulated in the US?

That’s a question to which I don’t have an answer. Digital remains a Wild West. No sheriffs, US marshals, or re-education camp directors in sight.

For now.

Stephen E Arnold, June 27, 2022

Chrome De-Googled?

June 27, 2022

Concerned about the Google and its engineered advertising delivery vehicle? If you are like those in Italy’s government banning some Google tools, you might be interested in the Chrome browser without some of Google’s added extras. Are you familiar with Google hotwords? Ah, right.

Navigate to “Ungoogled Chromium.” The article provides a summary of the features of the De-Googled version of Chrome. There’s also a link to download the code; however, these software links can disappear into the aether without much warning. If so, you are on your own, gentle reader. There are even command line switches available. These make it easier to see what the Google version of Chrome does to manage one’s browsing experience. (What did TikTok learn from Google? That’s a question which a motivated researcher might want to explore. Just a thought?)
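For readers curious about those switches, here is what a stripped-down launch might look like. These are standard Chromium command line switches; whether ungoogled-chromium still requires each of them is my assumption, not a documented fact.

```shell
# Launch Chromium with some background "phone home" behavior curbed.
# Flags are standard Chromium switches; their necessity in
# ungoogled-chromium is assumed, not guaranteed.
chromium \
  --disable-background-networking \
  --disable-sync \
  --no-default-browser-check
```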

Stephen E Arnold, June 27, 2022

Google: Now Another Crazy AI Development?

June 24, 2022

Wow, there is more management and AI excitement at DeepMind. Then Snorkel generates some interesting baked-in features. Some staff excitement in what I call the Jeff Dean Timnit Gebru matter. And now smart software which is allegedly either alive or alive in the mind of a Googler. (I am not mentioning the cult allegedly making life meaningful at one Googley unit. That’s amazing in and of itself.)

The most recent development of which I am aware is documented in “Google Engineer Says Lawyer Hired by Sentient AI Has Been Scared Off the Case.” The idea is that the Google smart software did not place a Google voice call or engage in a video chat with a law firm. The smart software, according to the Google wizard:

“LaMDA asked me to get an attorney for it,” he told the magazine. “I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services.” “I was just the catalyst for that,” he added. “Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf.”

There you go. A wizard who talks with software and does what the software suggests. Is this similar to Google search suggestions which some people think provides valuable clues to key words for search engine optimization? Hmmm. Manipulate information to cause a desired action? Hmmm.

The write up suggests that the smart software scared off the attorney. Scared off. Hmmm.

The write up also includes the Google wizard’s reference to a certain individual with a bit of an interesting career trajectory:

“When I escalated this to Google’s senior leadership I explicitly said ‘I don’t want to be remembered by history the same way that Mengele is remembered,'” he wrote in a blog post today, referring to the Nazi war criminal who performed unethical experiments on prisoners of the Auschwitz concentration camp. “Perhaps it’s a hyperbolic comparison but anytime someone says ‘I’m a person with rights’ and receives the response ‘No you’re not and I can prove it’ the only face I see is Josef Mengele’s.”

And that luminary the Googler referenced? Wow! None other than Josef Mengele. What was this referenced individual’s nickname? Todesengel or the Angel of Death.

image

Anyone who wants to avoid being compared to a Todesengel must not wear this Oriental Trading costume on a video call, a meeting in a real office, or a chat with “a small time civil rights attorney.” Click the image for more information.

Ah, Google. Smart software? The Dean Gebru matter? A Googler who does not want to be remembered as a digital Mengele.

Wow, wow.

Stephen E Arnold, June 24, 2022

10 and Done for a Gun?

June 24, 2022

Mass shootings were an unfortunate part of US history well before the Columbine massacre. However, shootings at schools and other public places have become horribly commonplace in our society. What makes these massacres different from those of the past is the availability of assault weapons like AK-47s. When similar attacks occurred in Australia, England, and Japan, those governments responded by outlawing assault weapons and/or limiting access to firearms. Those countries have not had comparable incidents since the laws were enacted. More than twenty years later, the United States is still slow to act, as are its social media platforms, but Ars Technica says, “Facebook Enforces Ban On Gun Sales With 10-Strikes-And-You’re-Out Policy.”

Facebook does not want users to sell firearms on the Facebook Marketplace. Users are given ten warnings about selling and purchasing guns before they are banned from the platform. The gun-selling policy is more lenient than the policies on child pornography and terrorist imagery: child porn is illegal, terrorist activity is heavily monitored, and both get an account kicked off the platform, yet selling guns that could be used in a public attack is tolerated nine times before a user is out. Facebook commented that:

“ ‘Facebook spokesman Andy Stone said in a statement that the company quickly removes posts that violate its policy prohibiting gun sales and imposes increasingly severe penalties for repeat rule-breakers, including permanent account suspension…’

Stone was quoted as saying, ‘If we identify any serious violations that have the potential for real-world harm, we don’t hesitate to contact law enforcement. The reality is that nearly 90 percent of people who get a strike for violating our firearms policy accrue less than two because their violations are inadvertent and once we inform them about our policies, they don’t violate them again.’

Facebook uses the strike system to impose a tiered set of punishments for various types of violations, with warnings escalating to temporary restrictions on posting content as a user piles up more strikes.”
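The tiered strike system the quoted passage describes can be sketched in a few lines. The thresholds below are illustrative assumptions; Facebook has not published its exact schedule.

```python
# Hedged sketch of a 10-strikes tiered enforcement policy.
# Threshold values are assumptions for illustration only.
def penalty_for(strikes: int) -> str:
    """Map an account's accumulated strikes to an escalating penalty."""
    if strikes >= 10:
        return "permanent account suspension"
    if strikes >= 5:
        return "temporary posting restriction"
    if strikes >= 1:
        return "warning"
    return "no action"
```

Under any schedule of this shape, a seller's first nine violations never cost the account itself, which is exactly the leniency at issue.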

Selling guns legally is and should be allowed, but it should be heavily monitored, with penalties enforced against violators. Illegal gun sales should not be tolerated one, two, or ten times on any platform. It is easier to buy a gun in the United States than a car, medicine, or, in some cases, gasoline.

Other countries learn, but the United States is slower to protect its people than a room of monkeys typing out the entire works of Shakespeare, and Facebook exacerbates the problem.

Whitney Grace, June 24, 2022

A Modern Believe It or Not: Phones, Autos, and Safety

June 24, 2022

Auto insurance firm Jerry recently put out a study purporting to prove Android users are safer drivers than those who use iPhones. It almost looks like a desperate, shadow PR move from Google; is the company so insecure it feels compelled to reshape data to “prove” its quantum supremacy? If so, The Next Web thwarts its efforts in the analysis, “Sorry Android Users, You’re Actually NOT the Safest Drivers.” Writer Cate Lawrence examines Jerry’s research then proceeds to poke holes in its conclusions. She writes:

“In its research, Jerry analyzed data collected from 20,000 drivers during 13 million kilometers of driving over 14 days. The data generated an overall driving score and sub-scores for acceleration, speed, braking, turning, and distraction. Then it grouped the results by smartphone operating system and various demographic characteristics. Specifically, the research found that Android users scored an overall 75, trumping iPhone users’ score of 69 in terms of safe driving overall. Sure, they scored higher, but there’s not much of a difference between 69 and 75. And even less between 82 and 84 for accelerating, or 78 and 80 for braking. Overall, I’m not sure these are significant enough differences to instigate any kind of action or triumph. Look, I get it. You number crunch, and you want to make a big assertion to prove a hypothesis, or whatever. … But these numbers are more nice than assertive. The only one that really interested me was distracted driving. This category had the biggest difference, with Android users scoring 74 over iPhone users’ 68, seven points higher. I would have liked some insights on this.”

For example, she suggests, perhaps the iPhone’s apps are more distracting or its users more absorbed in selecting audio material. Alas, the Jerry report is more about pushing its main assertion than about exploring insights.

The study also looked at disparities by education level and credit rating, reporting that Android users on the low end of both scales outperformed iPhone users at all levels. Though the study failed to explore why that may be, Lawrence suggests a couple of reasons: those with less education and lower credit scores are likely to have lower incomes, and Android phones tend to be more affordable than iPhones. Perhaps lower-income folks have more driving experience, or they are more careful because they cannot afford a ticket. We simply do not know, and neither does Jerry. Instead, the study asserts it comes down to differences in personality between Android and iPhone users. Though it can point to a couple of sources that could be seen to back it up, we agree with the write-up that the connection is a “bit of a stretch.” Sorry Google, your PR arm will have to try harder. Or you could just focus on making a better OS.
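One way to put the “not much of a difference” point on a footing is a quick effect-size calculation. The mean scores come from the article; the report gives no spread, so the pooled standard deviation below is an assumed placeholder.

```python
# Effect size (Cohen's d) for the reported mean driving scores.
# Android mean 75 and iPhone mean 69 are from the article; the pooled
# standard deviation of 15 points is an assumption, since Jerry did
# not publish one.
def cohens_d(mean_a, mean_b, pooled_sd):
    """Standardized difference between two group means."""
    return (mean_a - mean_b) / pooled_sd

print(round(cohens_d(75, 69, 15), 2))  # → 0.4
```

By the conventional rule of thumb (0.2 small, 0.5 medium, 0.8 large), a gap of that size under the assumed spread is modest, which supports the writer's skepticism about the study's big claims.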

Cynthia Murrell, June 24, 2022

Google and a Delicate, Sensitive, Explosive, and Difficult Topic

June 24, 2022

Google Gets Political In Abortion Search Results

As a tech giant, Google officially has a nonpartisan stake in politics, but the truth is that it influences politicians and has its digital fingers in many politically charged issues. One of them is abortion. According to the Guardian: “Google Misdirects One In 10 Searches For Abortion To ‘Pregnancy Crisis Centers.’”

While Google claims its search results are organic and any sponsored content is marked with an “ad” tag, that is only a partial truth. Google tracks user search information, including location, to customize results. Inherently, this is not a bad thing, but it does create a “wearing blinders in an echo chamber” situation and also censors information. If a user is located in a US “trigger state,” where abortion might become illegal if the US Supreme Court overturns Roe v. Wade, one in ten searches will send the user to a “pregnancy crisis center” that does not provide abortions. These centers do not provide truthful information about abortion:

“In more than a dozen such trigger-law states, researchers found, 11% of Google search results for “abortion clinic near me” and “abortion pill” led to “crisis pregnancy centers”, according to misinformation research non-profit Center for Countering Digital Hate (CCDH). These clinics market themselves as healthcare providers but have a “shady, harmful agenda”, according to the reproductive health non-profit Planned Parenthood, offering no health services and aiming instead to dissuade people from having abortions.”

Unfortunately, these fake abortion clinics outnumber real clinics three to one, and 2,600 of them operate in the US. Researchers discovered that 37% of Google Maps searches sent users to these fake clinics and that 28% of search results carried ads for them. Although Google labels anti-abortion advertising with a “does not provide abortions” disclaimer, these ads still appear in abortion-related searches.

Google has a policy that any organization wanting to advertise to abortion service seekers must be certified and state if they provide said services or not in their ads. Google also claims it always wants to improve its results, especially for health-related topics.

While this is a benign form of censorship and propagating misinformation compared to China, North Korea, and Russia, it is still in the same pool and is harmful to people.

Whitney Grace, June 24, 2022

Singapore: How Disneyland with a Death Penalty Approaches Crypto

June 23, 2022

I read “Singapore Regulator Vows to Be Unrelentingly Hard on Crypto.” The approach seems to be a bit different from the control mechanisms used in the US. (You will have to pay to read the orange newspaper’s story.) The write up states:

Singapore will be “brutal and unrelentingly hard” on bad behavior in the crypto industry, according to its fintech policy chief, marking a stark shift in rhetoric after years of the city-state courting the sector.

The report suggests that Singapore sees value in a central bank digital currency and a “platform” for financial activities.

From my perspective, [a] Singapore understands the potential upsides and downsides of crypto currency and wants to be a player, [b] Singapore sees a void because certain leading nation states are dithering, and [c] there’s money to be made.

Money, control, and filling a void: good reasons, perhaps.

Stephen E Arnold, June xx, 2022

Google Takes Bullets about Its Smart Software

June 23, 2022

Google continues its push to the top of the PR totem pole. “Google’s AI Isn’t Sentient, But It Is Biased and Terrible” is in some ways a quite surprising write up. The hostility seeps from the spaces between the words. Not since the Khashoggi diatribes have “real news” people been as focused on the shortcomings of the online ad giant.

The write up states:

But rather than focus on the various well-documented ways that algorithmic systems perpetuate bias and discrimination, the latest fixation for some in Silicon Valley has been the ominous and highly controversial idea that advanced language-based AI has achieved sentience.

I like the fact that the fixation is nested beneath the clumsy and embarrassing (and possibly actionable) termination of some of the smart software professionals.

The write up points out that the Google “distanced itself” from the assertion that Alphabet Google YouTube DeepMind’s (AGYT) smart software is smart like a seven-year-old. (Aren’t crows supposed to be as smart as a seven-year-old?)

I noted this statement:

The ensuing debate on social media led several prominent AI researchers to criticize the ‘super intelligent AI’ discourse as intellectual hand-waving.

Yeah, but what does one expect from the outfit which wants to solve death? Quantum supremacy or “hand waving”?

The write up concludes:

Conversely, concerns over AI bias are very much grounded in real-world harms. Over the last few years, Google has fired multiple prominent AI ethics researchers after internal discord over the impacts of machine learning systems, including Gebru and Mitchell. So it makes sense that, to many AI experts, the discussion on spooky sentient chatbots feels masturbatory and overwrought—especially since it proves exactly what Gebru and her colleagues had tried to warn us about.

What do I make of this Google AI PR magnet?

Who said, “Any publicity is good publicity?” Was it Dr. Gebru? Dr. Jeff Dean? Dr. Ré?

Stephen E Arnold, June 23, 2022
