DarkCyber, March 29, 2022: An Interview with Chris Westphal, DataWalk

March 29, 2022

Chris Westphal is the Chief Analytics Officer of DataWalk, a firm providing an investigative and analysis tool to commercial and government organizations. The 12-minute interview covers DataWalk’s unique capabilities, its data and information resources, and the firm’s workflow functionality. The video can be viewed on YouTube at this location.

Stephen E Arnold, March 29, 2022

American Airlines Scores Points on the Guy

January 24, 2022

I read “American Airlines Suing the Points Guy Over App That Synchs Frequent Flyer Data.” I have tried to avoid flying. Too many hassles with my assorted Real ID cards, my US government super ID, and passengers who won’t follow rules, wonky as some of those rules may be.

The write up focuses on a travel tips site which “lets users track airline miles from multiple airlines in one place.” The article includes some interesting information; for example, consider this statement in the write up:

“Consumers are always in control of their own data on The Points Guy App — they decide which loyalty programs and credit cards are accessible for the purpose of making their points-and-miles journey easier,” The Points Guy founder Brian Kelly said in a statement emailed to The Verge. The site is “choosing to fight back against American Airlines on behalf of travelers to protect their rights to access their points and miles so they can travel smarter,” he added.

The write up includes a legal document in one of those helpful formats which make access on a mobile device pretty challenging for a 77-year-old.

As wonderful as the write up is, I noticed one point (not the Guy’s) I would have surfaced; namely, “Why is it okay for big companies to federate and data mine user information but not okay for an individual consumer/customer?”

The reason? We are running the show. Get with it or get off and lose your points. Got that, Points Guy?

Stephen E Arnold, January 24, 2022

New Search Platform Focuses on Protecting Intellectual Property

January 21, 2022

Here is a startup offering a new search engine, now in beta. Huski uses AI to help companies big and small reveal anyone infringing on their intellectual property, be it text or images. It also promises solutions for title optimization and even legal counsel. The platform was developed by a team of startup engineers and intellectual property litigation pros who say they want to support innovative businesses from the planning stage through protection and monitoring. The Technology page describes how the platform works:

“* Image Recognition: Our deep learning-based image recognition algorithm scans millions of product listings online to quickly and accurately find potentially infringing listings with images containing the protected product.

* Natural Language Processing: Our machine learning algorithm detects infringements based on listing information such as price, product description, and customer reviews, while simultaneously improving its accuracy based on patterns it finds among confirmed infringements.

* Largest Knowledge Graph in the Field: Our knowledge graph connects entities such as products, trademarks, and lawsuits in an expansive network. Our AI systems gather data across the web 24/7 so that you can easily base decisions on the most up-to-date information.

* AI-Powered Smart Insights: What does it mean to your brands and listings when a new trademark pops out? How about when a new infringement case pops out? We’ll help you discover the related insights that you may never know otherwise.

* Big Data: All of the above intelligence is being derived from the data universe of the eCommerce, intellectual property, and trademark litigation. Our data engine is the biggest ‘black hole’ in that universe.”
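
The “image recognition” bullet maps to a familiar generic pattern: represent each image as a feature vector, then flag listings whose vectors sit close to the protected product’s. Here is a toy sketch of that general idea; the function names, vectors, and threshold are my illustrations, not Huski’s actual implementation.

```python
# Toy embedding-similarity screen: flag listings whose image feature
# vector is nearly identical to the protected product's vector.
# Purely illustrative; not Huski's real pipeline.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def flag_matches(protected_vec, listings, threshold=0.95):
    """Return listing IDs whose embedding is close to the protected one."""
    return [lid for lid, vec in listings.items()
            if cosine(protected_vec, vec) >= threshold]

# Hypothetical embeddings: listing_1 is a near copy, listing_2 is not.
protected = [0.9, 0.1, 0.4]
listings = {"listing_1": [0.88, 0.12, 0.41],
            "listing_2": [0.10, 0.90, 0.20]}
print(flag_matches(protected, listings))  # ['listing_1']
```

In a real system the vectors would come from a trained image model and the candidate set from large-scale marketplace crawls, but the matching step reduces to this kind of nearest-neighbor comparison.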

Founder Guan Wang and his team promise a lot here, but only time will tell if they can back it up. Launched in the challenging year of 2020, Huski.ai is based in Silicon Valley but it looks like it does much of its work online. The niche is not without competition, however. Perhaps a Huski will cause the competition to run away?

Cynthia Murrell, January 21, 2022

Palantir at the Intersection of Extremists and Prescription Fraud

January 5, 2022

Blogger Ron Chapman II, Esq., seems to be quite the fan of Palantir Technologies. We get that impression from his post, “Palantir’s Anti-Terror Tech Used to Fight RX Fraud.” The former Marine fell in love with the company’s tech in Afghanistan, where its analysis of terrorist attack patterns proved effective. We especially enjoyed the rah-rah write-up’s line about Palantir’s “success on the battlefield.” Chapman is not the only one enthused about the government-agency darling.

As for Palantir’s move into detecting prescription fraud, we learn the company begins with open-source data from the likes of census data, public and private studies, and Medicare’s Meaningful Use program. Chapman describes the firm’s methodology:

“Palantir then cross-references varying sets of Medicare data to determine which providers statistically deviate from the norm amongst large data sets. For instance, Palantir can analyze prescription data to determine which providers rank the highest in opiate prescribing for a local area. Palantir can then cross-reference those claims against patient location data to determine if the providers’ patients are traveling long distances for opiates. Palantir can further analyze the data to determine if the patient population of a provider has been previously treated by a physician on the Office of Inspector General exclusion database (due to prior misconduct) which would indicate that the patients are not ‘legitimate.’ By using ‘big data’ to determine which providers deviate from statistical trends, Palantir can provide a more accurate basis for a payment audit, generate probable cause for search warrants, or encourage a federal grand jury to further investigate a provider’s activities. After the government obtains additional provider-specific data, Palantir can analyze specific patient files, cell phone data, email correspondence, and electronic discovery. Investigators can review cell phone data and email correspondence to determine if networks exist between providers and patients and determine the existence of a healthcare fraud conspiracy or patient brokering.”
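
The screening logic in that passage boils down to a peer-group outlier test: measure how far each provider sits from the local prescribing norm and flag the statistical deviants. Here is a minimal sketch of that general idea; the provider IDs, counts, and z-score cutoff are my illustrations, since Palantir’s actual models are not public.

```python
# Toy peer-group outlier screen: flag providers whose opiate
# prescription counts sit well above the local norm. Illustrative
# only; not Palantir's actual method.
from statistics import mean, stdev

def flag_outlier_providers(rx_counts, z_threshold=2.0):
    """Return IDs of providers more than z_threshold standard
    deviations above the peer-group mean prescription count."""
    counts = list(rx_counts.values())
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []
    return [pid for pid, n in rx_counts.items()
            if (n - mu) / sigma > z_threshold]

# Hypothetical peer group: one provider prescribes far more opiates.
peer_group = {"prov_a": 110, "prov_b": 95, "prov_c": 480,
              "prov_d": 102, "prov_e": 98, "prov_f": 120,
              "prov_g": 105}
print(flag_outlier_providers(peer_group))  # ['prov_c']
```

A provider flagged by this kind of screen would then, as the passage describes, be cross-referenced against patient travel distances and the OIG exclusion list before an audit or warrant is pursued.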

Despite his fondness for Palantir, Chapman does include the obligatory passage on privacy and transparency concerns. He notes that healthcare providers, specifically, are concerned about undue scrutiny should their patient care decisions somehow diverge from a statistical norm. A valid consideration. As with law enforcement, the balance between the good of society and individual rights is a tricky one. Palantir was launched in 2003 by Peter Thiel, who was also a cofounder of PayPal and is a notorious figure to some. The company is based in Denver, Colorado.

Cynthia Murrell, January 5, 2022

It Is Official: One Cannot Trust Lawyers Working from Coffee Shops

November 16, 2021

I knew it. I had a hunch that attorneys who work from coffee shops, van life vehicles, and basements were not productive. Billing hours is easy; doing work like reading documents, fiddling with eDiscovery systems, and trying to get Microsoft Word to number lines correctly are harder.

I read “Contract Lawyers Face a Growing Invasion of Surveillance Programs That Monitor Their Work.” The write up points out that what I presume to be GenX, GenY, and millennial workers don’t want to be in a high school detention hall staffed by an angry, game-losing basketball coach. No coach, just surveillance software dolled up with facial recognition, “productivity” metrics, and baked-in time logging functions.

Here’s a passage I noted:

Contract attorneys such as Anidi [Editor note: a real lawyer I presume] have become some of America’s first test subjects for this enhanced monitoring, and many are reporting frustrating results, saying the glitchy systems make them feel like a disposable cog with little workday privacy.

With some clients pushing back against legal bills which are disconnected from what law firm clients perceive as reality, legal outfits have to get their humanoid resources to “perform”. The monitoring systems allow the complaining client to review outputs from the systems. Ah, ha. We can prove with real data our legal eagles are endlessly circling the client’s legal jungle.

My take is different: I never trusted lawyers. Now lawyers employing lawyers don’t trust these professionals either. That’s why people go to work, have managers who monitor, and keep the professionals from hanging out at the water fountain.

Stephen E Arnold, November 16, 2021

Alphabet Spells Out YouTube Recommendations: Are Some Letters Omitted?

September 23, 2021

I have been taking a look at Snorkel (Stanford AI Lab’s open source stuff and the commercial Snorkel.ai variants). I am a dim wit. It seems to me that Google has found a diving partner and is embracing some exotic equipment. The purpose of the Snorkel is to implement smart workflows. These apparently will allow better, faster, and cheaper operations; for example, classifying content for the purpose of training smart software. Are there applications of Snorkel-type thinking to content recommendation systems? Absolutely. Note that subject matter experts and knowledge bases are needed at the outset of setting up a Snorkelized system. Then the “smarts” are componentized. Future interaction is by “engineers”, who may or may not be subject matter experts. The directed acyclic graphs are obviously “directed.” Sounds super efficient.
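
For readers unfamiliar with the Snorkel idea: subject matter experts write small labeling functions, and their noisy votes are combined into training labels for the downstream model. Here is a toy majority-vote sketch of that setup; Snorkel’s real label model is more sophisticated, and every name below is illustrative.

```python
# Toy weak-supervision setup in the Snorkel style: experts encode
# heuristics as labeling functions; a combiner resolves their votes.
# Snorkel's actual generative label model is more sophisticated.
SPAM, HAM, ABSTAIN = 1, 0, -1

def lf_contains_prize(text):          # expert heuristic #1
    return SPAM if "prize" in text.lower() else ABSTAIN

def lf_contains_meeting(text):        # expert heuristic #2
    return HAM if "meeting" in text.lower() else ABSTAIN

def lf_many_exclamations(text):       # expert heuristic #3
    return SPAM if text.count("!") >= 3 else ABSTAIN

def label(text, lfs):
    """Majority vote over non-abstaining labeling functions."""
    votes = [v for v in (lf(text) for lf in lfs) if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)

lfs = [lf_contains_prize, lf_contains_meeting, lf_many_exclamations]
print(label("You won a PRIZE!!! Claim now!!!", lfs))  # 1 (spam)
print(label("Agenda for tomorrow's meeting", lfs))    # 0 (ham)
```

The point relevant to the post: once the experts’ knowledge is captured in these functions, the components run on their own, and later “engineers” tune the pipeline without necessarily understanding the domain.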

Now navigate to “On YouTube’s Recommendation System.” This is a lot of words for a Googler to string together: About 2,500.

Here’s the key passage:

These human evaluations then train our system to model their decisions, and we now scale their assessments to all videos across YouTube.

Now what letters are left out? Maybe the ones that spell built-in biases, stochastic drift, and Timnit Gebru? On the other hand, that could be a “Ré” of hope for cost reduction.

Stephen E Arnold, September 23, 2021

Will Life Become Directed Nudges?

September 17, 2021

I read an article with a thought provoking message. The write up is “Changing Customer Behavior in the Next New Normal.” How do these changes come about? The article is about insurance, which has seemed like a Ponzi pie dolloped with weird assurances. And when disaster strikes, where are the insurance companies? Some work like beavers to avoid fulfilling their end of the financial deal policy holders thought was a sure thing. House burned up in California? Ida nuke your trailer? Yeah, happy customers.

But what’s interesting about the write up is that it advocates manipulation, nudges, and weaponized digital experiences to get people to buy insurance. I learned:

The experience of living through the pandemic has changed the way people live and behave. Changes which offered positive experiences will last longer, especially the ones driven by well-being, convenience, and simplicity. Thereby, digital adoption, value-based personalized purchasing, and increased health awareness will be the customer behaviors that will shape the next new normal. This will be a game-changer for the life insurance industry and provide an opportunity for the industry to think beyond the usual, innovate, and offer granular, value-based and integrated products to meet customer needs. The focus will be on insurance offerings, which will combine risk transfer with proactive and value-added services and emerge as a differentiator.

Not even the murky writing of insurance professionals can completely hide the message. Manipulation is the digital future. If people selling death insurance have figured it out, other business sectors will as well.

The future will be full of directed experiences.

That’s super.

Stephen E Arnold, September 17, 2021

TikTok: Privacy Spotlight

September 15, 2021

There is nothing like rapid EU response to privacy matters. “TikTok Faces Privacy Investigations by EU Watchdog” states:

The watchdog is looking into its processing of children’s personal data, and whether TikTok is in line with EU laws about transferring personal data to other countries, such as China.

The data hoovering capabilities of a TikTok-type app have been known for what — a day or two or a decade? My hunch is that we are leaning toward the multi-year awareness side of the privacy fence. The write up points out:

TikTok said privacy was “our highest priority”.

Plus about a year ago an EU affiliated unit poked into the TikTok privacy matter.

However, the write up fails to reference a brilliant statement by a Swisher-type of thinker. My recollection is that the gist of the analysis of the TikTok privacy issue in the US was, “Hey, no big deal.”

We’ll see. I wait for a report on this topic. Perhaps a TikTok indifferent journalist will make a TikTok summary of the report findings.

Stephen E Arnold, September 15, 2021

Has TikTok Set Off Another Alarm in Washington, DC?

September 9, 2021

Perhaps TikTok was hoping the recent change to its privacy policy would slip under the radar. The Daily Dot reports that “Senators are ‘Alarmed’ at What TikTok Might Be Doing with your Biometric Data.” The video-sharing platform’s new policy specifies it now “may collect biometric identifiers and biometric information,” like “faceprints and voiceprints.” Why are we not surprised? Two US senators expressed alarm at the new policy which, they emphasize, affects nearly 130 million users while revealing few details. Writer Andrew Wyrich reports,

“That change has sparked Sen. Amy Klobuchar (D-Minn.) and Sen. John Thune (R-S.D.) to ask TikTok for more information on how the app plans to use that data they said they’d begin collecting. Klobuchar and Thune wrote a letter to TikTok earlier this month, which they made public this week. In it, they ask the company to define what constitutes a ‘faceprint’ and a ‘voiceprint’ and how exactly that collected data will be used. They also asked whether that data would be shared with third parties and how long the data will be held by TikTok. … Klobuchar and Thune also asked the company to tell them whether it was collecting biometric data on users under 18 years old; whether it will ‘make any inferences about its users based on faceprints and voiceprints;’ and whether the company would use machine learning to determine a user’s age, gender, race, or ethnicity based on the collected faceprints or voiceprints.”

Our guess is yes to all three, though we are unsure whether the company will admit as much. Nevertheless, the legislators make it clear they expect answers to these questions as well as a list of all entities that will have access to the data. We recommend you do not hold your breath, Senators.

Cynthia Murrell, September 9, 2021

Change Is Coming But What about Un-Change?

September 8, 2021

My research team is working on a short DarkCyber video about automating work processes related to smart software. The idea is that one smart software system can generate an output to update another smart output system. The trend was evident more than a decade ago in the work of Dr. Zbigniew Michalewicz, his son, and collaborators. He is the author of How to Solve It: Modern Heuristics. There were predecessors, and today many others follow smart approaches to operations for artificial intelligence, or what thumbtypers call AIOps. The DarkCyber video will become available on October 5, 2021. We’ll try to keep the video peppy because smart software methods are definitely exciting and mostly invisible. And like other embedded components, some of these “modules” will become commoditized and just used “as is.” That’s important because who worries about a component in a larger system? Do you wonder if the microwave is operating at peak efficiency with every component chugging along up to spec? Nope and nope.

I read a wonderful example of Silicon Valley MBA thinking called “It’s Time to Say “Ok, Boomer!” to Old School Change Management.” At first glance, the ideas about efficiency and keeping pace with technical updates make sense. The write up states:

There are a variety of dated methods when it comes to change management. Tl;dr it’s lots of paper and lots of meetings. These practices are widely regarded as effective across the industry, but research shows this is a common delusion and change management itself needs to change.

Hasta la vista Messrs. Drucker and the McKinsey framework.

The write up points out that a solution is at hand:

DevOps teams push lots of changes and this is creating a bottleneck as manual change management processes struggle to keep up. But, the great thing about DevOps is that it solves the problem it creates. One of the key aspects where DevOps can be of great help in change management is in the implementation of compliance. If the old school ways of managing change are too slow why not automate them like everything else? We already do this for building, testing and qualifying, so why not change? We can use the same automation to record change events in real time and implement release controls in the pipelines instead of gluing them on at the end.

Does this seem like circular reasoning?

I want to point out that if one of the automation components operates using probability and the thresholds are incorrect, the data poisoned (corrupted by intent or chance) or the “averaging” which is a feature of some systems triggers a butterfly effect, excitement may ensue. The idea is that a small change may have a large impact downstream; for example, a wing flap in Biloxi could create a flood in the 28th Street Flatiron stop.
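
A toy illustration of that threshold sensitivity (my own example, not from the write up): nudge a probability cutoff by a hundredth and borderline items flip, and in a chained pipeline each flip feeds the next stage.

```python
# Toy demonstration: a tiny threshold shift changes which borderline
# items a pipeline stage escalates. Scores and labels are invented.
scores = [0.49, 0.505, 0.51, 0.62, 0.70]

def route(scores, threshold):
    """Route each scored item based on a probability cutoff."""
    return ["escalate" if s >= threshold else "ignore" for s in scores]

print(route(scores, 0.50))
# ['ignore', 'escalate', 'escalate', 'escalate', 'escalate']
print(route(scores, 0.51))
# ['ignore', 'ignore', 'escalate', 'escalate', 'escalate']
```

One hundredth of a point, one flipped decision here; chain a few such stages together and the downstream divergence is the wing flap the paragraph describes.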

Several observations:

  • AIOps are already in operation at outfits like the Google and will be componentized in an AWS-style package
  • Embedded stuff, like popular libraries, is just used and not thought about. The practice brings joy to bad actors who corrupt some library offerings
  • Once a component is up and running and assumed to be okay, those modules themselves resist change. When 20 somethings encounter mainframe code, their surprise is consistent. Are we gonna change this puppy or slap on a wrapper? What’s your answer, gentle reader?

Net net: AIOps sets the stage for more Timnit Gebru shoot outs about bias and discrimination as well as the type of cautions produced by Cathy O’Neil in Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.

Okay, thumbtyper.

Stephen E Arnold, September 8, 2021
