Spicing Up Possibly Biased Algorithms with Wiener Math

June 27, 2022

Let’s assume that the model described in “The Mathematics of Human Behavior: How My New Model Can Spot Liars and Counter Disinformation” is excellent. Let’s further assume that it generates “reliable” outputs which correspond to what humanoids do in real life. A final building block is to use additional predictive analytics to process the outputs of the Wiener-esque model and pipe them into an online advertising system like Apple’s, Facebook’s, Google’s, or TikTok’s.

This sounds like a useful thought experiment.

Consider this statement from the cited article:

In this new “information-based” approach, the behavior of a person – or group of people – over time is deduced by modeling the flow of information. So, for example, it is possible to ask what will happen to an election result (the likelihood of a percentage swing) if there is “fake news” of a given magnitude and frequency in circulation. But perhaps most unexpected are the deep insights we can glean into the human decision-making process. We now understand, for instance, that one of the key traits of the Bayes updating is that every alternative, whether it is the right one or not, can strongly influence the way we behave.

These statements suggest that the outputs can be used for different use cases.

Now, how will this new model affect online advertising, and, in a larger context, how will the model allow humanoid thoughts and actions to be shaped or weaponized? My initial ideas are:

  1. Feedback signals about content which does not advance an agenda. The idea is that a “flagged” content object is never made available to an online user. Is this a more effective form of filtering? I think dynamic pre-filtering is a winner for some (see the sketch after this list).
  2. Filtered content can be weaponized to advance a particular line of thought. The metaphor is that a protective mother does not allow the golden child to play outside at dusk without appropriate supervision. The golden child gleams in the gloaming and learns to avoid risky behaviors unless an appropriate guardian (maybe a Musk Optimus) is shadowing the golden child.
  3. Ads can be matched against what the Amazon, Apple, Facebook, Google, and TikTok systems have identified as appropriate. The resulting ads generated by combining the proprietary methods with those described in the write up increase the close rate by a positive amount.
  4. Use cases for law enforcement exist as well.
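
Here is a minimal sketch of what the dynamic pre-filtering in item 1 could look like. The agenda_alignment_score stub and the 0.4 threshold are hypothetical stand-ins for the behavior model's output; none of these names come from the cited article.

```python
from dataclasses import dataclass

@dataclass
class ContentObject:
    object_id: str
    text: str

def agenda_alignment_score(obj: ContentObject) -> float:
    """Hypothetical placeholder for the behavior model's output:
    0.0 (off-agenda) to 1.0 (fully on-agenda)."""
    return 0.5  # stub value; a real system would model information flow

def pre_filter(feed: list[ContentObject], threshold: float = 0.4) -> list[ContentObject]:
    """Drop ("flag") objects whose alignment score falls below the threshold,
    so the online user never sees them."""
    visible = []
    for obj in feed:
        if agenda_alignment_score(obj) >= threshold:
            visible.append(obj)
        # flagged objects are simply never surfaced
    return visible

if __name__ == "__main__":
    feed = [ContentObject("a1", "sample post"), ContentObject("a2", "another post")]
    print([o.object_id for o in pre_filter(feed)])
```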

Exciting opportunities abound. Once again, I am glad I am old. Were he alive, Norbert Wiener might share my “glad I am old” notion when confronted with applied Wiener math.

Stephen E Arnold, June 27, 2022

DarkCyber, March 29, 2022: An Interview with Chris Westphal, DataWalk

March 29, 2022

Chris Westphal is the Chief Analytics Officer of DataWalk, a firm providing an investigative and analysis tool to commercial and government organizations. The 12-minute interview covers DataWalk’s unique capabilities, its data and information resources, and the firm’s workflow functionality. The video can be viewed on YouTube at this location.

Stephen E Arnold, March 29, 2022

American Airlines Scores Points on the Guy

January 24, 2022

I read “American Airlines Suing the Points Guy Over App That Synchs Frequent Flyer Data.” I have tried to avoid flying. Too many hassles with my assorted Real ID cards, my US government super ID, and passengers who won’t follow rules, wonky as some of those rules may be.

The write up focuses on a travel tips site which “lets users track airline miles from multiple airlines in one place.” The article includes some interesting information; for example, consider this statement in the write up:

“Consumers are always in control of their own data on The Points Guy App — they decide which loyalty programs and credit cards are accessible for the purpose of making their points-and-miles journey easier,” The Points Guy founder Brian Kelly said in a statement emailed to The Verge. The site is “choosing to fight back against American Airlines on behalf of travelers to protect their rights to access their points and miles so they can travel smarter,” he added.

The write up includes a legal document in one of those helpful formats which make access on a mobile device pretty challenging for a 77-year-old.

As wonderful as the write up is, I noticed one point (not the Guy’s) I would have surfaced; namely, “Why is it okay for big companies to federate and data mine user information but not okay for an individual consumer/customer?”

The reason? We are running the show. Get with it or get off and lose your points. Got that, Points Guy?

Stephen E Arnold, January 24, 2022

New Search Platform Focuses on Protecting Intellectual Property

January 21, 2022

Here is a startup offering a new search engine, now in beta. Huski uses AI to help companies big and small reveal anyone infringing on their intellectual property, be it text or images. It also promises solutions for title optimization and even legal counsel. The platform was developed by a team of startup engineers and intellectual property litigation pros who say they want to support innovative businesses from the planning stage through protection and monitoring. The Technology page describes how the platform works:

“* Image Recognition: Our deep learning-based image recognition algorithm scans millions of product listings online to quickly and accurately find potentially infringing listings with images containing the protected product.

* Natural Language Processing: Our machine learning algorithm detects infringements based on listing information such as price, product description, and customer reviews, while simultaneously improving its accuracy based on patterns it finds among confirmed infringements.

* Largest Knowledge Graph in the Field: Our knowledge graph connects entities such as products, trademarks, and lawsuits in an expansive network. Our AI systems gather data across the web 24/7 so that you can easily base decisions on the most up-to-date information.

* AI-Powered Smart Insights: What does it mean to your brands and listings when a new trademark pops out? How about when a new infringement case pops out? We’ll help you discover the related insights that you may never know otherwise.

* Big Data: All of the above intelligence is being derived from the data universe of the eCommerce, intellectual property, and trademark litigation. Our data engine is the biggest ‘black hole’ in that universe.”
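
What might the image recognition step look like in practice? Here is a minimal sketch of embedding-based image matching, the general technique the Huski description points to; the function names, the similarity threshold, and the random test vectors are illustrative assumptions, not Huski’s actual code or API.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two image-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_possible_infringements(protected: np.ndarray,
                                listing_embeddings: dict[str, np.ndarray],
                                threshold: float = 0.9) -> list[str]:
    """Return listing IDs whose image embeddings sit close to the protected
    product's embedding. Embeddings would come from a trained vision model."""
    return [listing_id
            for listing_id, emb in listing_embeddings.items()
            if cosine_similarity(protected, emb) >= threshold]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    protected = rng.normal(size=128)
    listings = {"listing-1": protected + rng.normal(scale=0.05, size=128),
                "listing-2": rng.normal(size=128)}
    print(flag_possible_infringements(protected, listings))
```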

Founder Guan Wang and his team promise a lot here, but only time will tell if they can back it up. Launched in the challenging year of 2020, Huski.ai is based in Silicon Valley but it looks like it does much of its work online. The niche is not without competition, however. Perhaps a Huski will cause the competition to run away?

Cynthia Murrell, January 21, 2022

Palantir at the Intersection of Extremists and Prescription Fraud

January 5, 2022

Blogger Ron Chapman II, Esq., seems to be quite the fan of Palantir Technologies. We get that impression from his post, “Palantir’s Anti-Terror Tech Used to Fight RX Fraud.” The former Marine fell in love with the company’s tech in Afghanistan, where its analysis of terrorist attack patterns proved effective. We especially enjoyed the rah-rah write-up’s line about Palantir’s “success on the battlefield.” Chapman is not the only one enthused about the government-agency darling.

As for Palantir’s move into detecting prescription fraud, we learn the company begins with open-source data such as census data, public and private studies, and Medicare’s Meaningful Use program. Chapman describes the firm’s methodology:

“Palantir then cross-references varying sets of Medicare data to determine which providers statistically deviate from the norm amongst large data sets. For instance, Palantir can analyze prescription data to determine which providers rank the highest in opiate prescribing for a local area. Palantir can then cross-reference those claims against patient location data to determine if the providers’ patients are traveling long distances for opiates. Palantir can further analyze the data to determine if the patient population of a provider has been previously treated by a physician on the Office of Inspector General exclusion database (due to prior misconduct) which would indicate that the patients are not ‘legitimate.’ By using ‘big data’ to determine which providers deviate from statistical trends, Palantir can provide a more accurate basis for a payment audit, generate probable cause for search warrants, or encourage a federal grand jury to further investigate a provider’s activities. After the government obtains additional provider-specific data, Palantir can analyze specific patient files, cell phone data, email correspondence, and electronic discovery. Investigators can review cell phone data and email correspondence to determine if networks exist between providers and patients and determine the existence of a healthcare fraud conspiracy or patient brokering.”
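
As a rough illustration of the statistical-deviation step Chapman describes, here is a minimal sketch that flags providers whose opiate prescribing rate sits far from the peer-group mean. The provider IDs, rates, and z-score threshold are made-up assumptions, not Palantir’s actual method or data.

```python
import statistics

def flag_outlier_providers(prescribing_rates: dict[str, float],
                           z_threshold: float = 2.0) -> list[str]:
    """Flag providers whose prescribing rate deviates strongly from the
    peer-group mean, measured as a z-score."""
    rates = list(prescribing_rates.values())
    mean = statistics.fmean(rates)
    stdev = statistics.pstdev(rates)
    if stdev == 0:
        return []
    return [provider
            for provider, rate in prescribing_rates.items()
            if abs(rate - mean) / stdev >= z_threshold]

if __name__ == "__main__":
    rates = {"prov-001": 12.0, "prov-002": 11.5, "prov-003": 48.0,
             "prov-004": 12.3, "prov-005": 11.8}
    # The 1.5 threshold is purely illustrative; a real audit would tune it.
    print(flag_outlier_providers(rates, z_threshold=1.5))
```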

Despite his fondness for Palantir, Chapman does include the obligatory passage on privacy and transparency concerns. He notes that healthcare providers, specifically, are concerned about undue scrutiny should their patient care decisions somehow diverge from a statistical norm. A valid consideration. As with law enforcement, the balance between the good of society and individual rights is a tricky one. Palantir was launched in 2003 by Peter Thiel, who was also a cofounder of PayPal and is a notorious figure to some. The company is based in Denver, Colorado.

Cynthia Murrell, January 5, 2022

It Is Official: One Cannot Trust Lawyers Working from Coffee Shops

November 16, 2021

I knew it. I had a hunch that attorneys who work from coffee shops, van life vehicles, and basements were not productive. Billing hours is easy; doing work like reading documents, fiddling with eDiscovery systems, and trying to get Microsoft Word to number lines correctly is harder.

I read “Contract Lawyers Face a Growing Invasion of Surveillance Programs That Monitor Their Work.” The write up points out that what I presume to be GenX, GenY, and millennials don’t want to be in a high school detention hall staffed by an angry, game-losing basketball coach. No coach, just surveillance software dolled up with facial recognition, “productivity” metrics, and baked-in time logging functions.

Here’s a passage I noted:

Contract attorneys such as Anidi [Editor note: a real lawyer I presume] have become some of America’s first test subjects for this enhanced monitoring, and many are reporting frustrating results, saying the glitchy systems make them feel like a disposable cog with little workday privacy.

With some clients pushing back against legal bills which are disconnected from what law firm clients perceive as reality, legal outfits have to get their humanoid resources to “perform”. The monitoring systems allow the complaining client to review outputs from the systems. Ah, ha. We can prove with real data our legal eagles are endlessly circling the client’s legal jungle.

My take is different: I never trusted lawyers. Now lawyers employing lawyers don’t trust these professionals either. That’s why people go to work, have managers who monitor, and keep the professionals from hanging out at the water fountain.

Stephen E Arnold, November 16, 2021

Alphabet Spells Out YouTube Recommendations: Are Some Letters Omitted?

September 23, 2021

I have been taking a look at Snorkel (Stanford AI Labs, open source stuff, and the commercial Snorkel.ai variants). I am a dim wit. It seems to me that Google has found a diving partner and is embracing some exotic equipment. The purpose of the Snorkel is to implement smart workflows. These apparently will allow better, faster, and cheaper operations; for example, classifying content for the purpose of training smart software. Are there applications of Snorkel-type thinking to content recommendation systems? Absolutely. Note that subject matter experts and knowledge bases are needed at the outset of setting up a Snorkelized system. Then, the “smarts” are componentized. Future interaction is by “engineers”, who may or may not be subject matter experts. The directed acyclic graphs are obviously “directed.” Sounds super efficient.
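
For readers unfamiliar with the approach, here is a bare-bones illustration of the labeling-function idea behind Snorkel-style programmatic labeling, written in plain Python rather than with the Snorkel library itself; the functions, label values, and sample documents are assumptions for illustration only.

```python
# Subject matter experts encode heuristics as small labeling functions;
# their noisy votes are aggregated into training labels for a downstream model.
from collections import Counter

ABSTAIN, NOT_RELEVANT, RELEVANT = -1, 0, 1

def lf_contains_keyword(text: str) -> int:
    """SME heuristic: a keyword signals the 'relevant' class."""
    return RELEVANT if "recommendation" in text.lower() else ABSTAIN

def lf_too_short(text: str) -> int:
    """SME heuristic: very short items are probably not useful."""
    return NOT_RELEVANT if len(text.split()) < 3 else ABSTAIN

LABELING_FUNCTIONS = [lf_contains_keyword, lf_too_short]

def weak_label(text: str) -> int:
    """Aggregate labeling-function votes by simple majority. (Snorkel instead
    fits a generative label model that weighs functions by estimated accuracy.)"""
    votes = [v for v in (lf(text) for lf in LABELING_FUNCTIONS) if v != ABSTAIN]
    return Counter(votes).most_common(1)[0][0] if votes else ABSTAIN

if __name__ == "__main__":
    docs = ["Watch this recommendation engine demo", "ok", "long unrelated transcript text"]
    print([weak_label(d) for d in docs])
```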

Now navigate to “On YouTube’s Recommendation System.” This is a lot of words for a Googler to string together: About 2,500.

Here’s the key passage:

These human evaluations then train our system to model their decisions, and we now scale their assessments to all videos across YouTube.

Now what letters are left out? Maybe the ones that spell built-in biases, stochastic drift, and Timnit Gebru? On the other hand, that could be a “Ré” of hope for cost reduction.

Stephen E Arnold, September 23, 2021

Will Life Become Directed Nudges?

September 17, 2021

I read an article with a thought-provoking message. The write up is “Changing Customer Behavior in the Next New Normal.” How do these changes come about? The article is about insurance, which has seemed like a Ponzi pie dolloped with weird assurances when disaster strikes. And when disaster does strike, where are the insurance companies? Some work like beavers to avoid fulfilling their end of the financial deal policy holders thought was a sure thing. House burn up in California? Ida nuke your trailer? Yeah, happy customers.

But what’s interesting about the write up is that it advocates manipulation, nudges, and weaponized digital experiences to get people to buy insurance. I learned:

The experience of living through the pandemic has changed the way people live and behave. Changes which offered positive experiences will last longer, especially the ones driven by well-being, convenience, and simplicity. Thereby, digital adoption, value-based personalized purchasing, and increased health awareness will be the customer behaviors that will shape the next new normal. This will be a game-changer for the life insurance industry and provide an opportunity for the industry to think beyond the usual, innovate, and offer granular, value-based and integrated products to meet customer needs. The focus will be on insurance offerings, which will combine risk transfer with proactive and value-added services and emerge as a differentiator.

Not even the murky writing of insurance professionals can completely hide the message. Manipulation is the digital future. If people selling death insurance have figured it out, other business sectors will as well.

The future will be full of directed experiences.

That’s super.

Stephen E Arnold, September 17, 2021

TikTok: Privacy Spotlight

September 15, 2021

There is nothing like rapid EU response to privacy matters. “TikTok Faces Privacy Investigations by EU Watchdog” states:

The watchdog is looking into its processing of children’s personal data, and whether TikTok is in line with EU laws about transferring personal data to other countries, such as China.

The data hoovering capabilities of a TikTok-type app have been known for what — a day or two or a decade? My hunch is that we are leaning toward the multi-year awareness side of the privacy fence. The write up points out:

TikTok said privacy was “our highest priority”.

Plus, about a year ago, an EU-affiliated unit poked into the TikTok privacy matter.

However, the write up fails to reference a brilliant statement by a Swisher-type thinker. My recollection is that the gist of the analysis of the TikTok privacy issue in the US was, “Hey, no big deal.”

We’ll see. I await a report on this topic. Perhaps a TikTok-indifferent journalist will make a TikTok summary of the report findings.

Stephen E Arnold, September 15, 2021

Has TikTok Set Off Another Alarm in Washington, DC?

September 9, 2021

Perhaps TikTok was hoping the recent change to its privacy policy would slip under the radar. The Daily Dot reports that “Senators are ‘Alarmed’ at What TikTok Might Be Doing with your Biometric Data.” The video-sharing platform’s new policy specifies it now “may collect biometric identifiers and biometric information,” like “faceprints and voiceprints.” Why are we not surprised? Two US senators expressed alarm at the new policy which, they emphasize, affects nearly 130 million users while revealing few details. Writer Andrew Wyrich reports,

“That change has sparked Sen. Amy Klobuchar (D-Minn.) and Sen. John Thune (R-S.D.) to ask TikTok for more information on how the app plans to use that data they said they’d begin collecting. Klobuchar and Thune wrote a letter to TikTok earlier this month, which they made public this week. In it, they ask the company to define what constitutes a ‘faceprint’ and a ‘voiceprint’ and how exactly that collected data will be used. They also asked whether that data would be shared with third parties and how long the data will be held by TikTok. … Klobuchar and Thune also asked the company to tell them whether it was collecting biometric data on users under 18 years old; whether it will ‘make any inferences about its users based on faceprints and voiceprints;’ and whether the company would use machine learning to determine a user’s age, gender, race, or ethnicity based on the collected faceprints or voiceprints.”

Our guess is yes to all three, though we are unsure whether the company will admit as much. Nevertheless, the legislators make it clear they expect answers to these questions as well as a list of all entities that will have access to the data. We recommend you do not hold your breath, Senators.

Cynthia Murrell, September 9, 2021
